Earlier this month, I pointed to the story of a church service conducted by four digital avatars and featuring a sermon, prayers, and hymns written by the ChatGPT large language model (LLM), a type of artificial intelligence (AI). The service was attended by over 300 human worshippers. At the time, I joked that while a sermon or homily written by AI might not be ideal, it couldn’t be worse than the current state of Catholic homiletics. Still, one could certainly make the case that a homily should be the product of personal faith and reflection on the Scriptures, and it should also reflect the context of a particular congregation. Similarly, the question of whether an AI image generator could be used to create sacred art has been percolating on social media, with some arguing that sacred art ought to be a product of faith, ruling out AI generation.
The latest issue of Studies in Christian Ethics, the journal of the United Kingdom’s Society for the Study of Christian Ethics, includes a symposium on artificial intelligence. I want to focus on two of the articles from the symposium that deal not with AI and worship, but rather with the use of AI in pastoral care, that is, providing spiritual comfort in difficult times and guidance along life’s journey. Potential uses for AI in pastoral care might include offering advice, serving as a conversational companion, or providing scriptural reflections. Both articles, one by Eric Stoddart and the other by Andrew Proudfoot, tackle the questions of whether an artificial intelligence could possess the qualities necessary to provide some kind of pastoral care to human beings, and if so, what form it might take.
In an earlier article on AI, I claimed that theological questions regarding AI tend to be either ontological or ethical. Ontological questions include those like what it means to be “intelligent,” whether AI can have emotions, and what it would mean to have a relationship with an intelligent machine. Ethical questions include concerns about how AI is used and what effects it might have on society. Although the question of whether AI could provide pastoral care looks at one particular way AI could be used, the questions raised in the two articles are primarily ontological, since both focus on whether AI is capable of providing pastoral care to human beings. That being said, both raise ethical concerns along the way, as well.
Stoddart and Proudfoot come at the question of AI and pastoral care from two different starting points. Stoddart approaches the issue by describing what pastoral care looks like in practice, identifying its core characteristics, and from there exploring whether AI could conceivably possess those characteristics and therefore meaningfully engage in pastoral care. I personally find this practical approach congenial, and it has some elements in common with the practice of casuistry in ethics. Proudfoot, on the other hand, begins more abstractly by outlining four markers of “authentic encounter” and then evaluates whether AI could demonstrate those markers. Only then does he turn to the question of pastoral care. Because Proudfoot’s breakdown of these four markers provides a good summary of the issues at stake in this conversation, however, I will start with his treatment.
Proudfoot draws the four markers of authentic encounter from Karl Barth’s Church Dogmatics, where they are meant to describe the I-Thou encounter between one human and another. Barth obviously did not write with the possibility of an artificial intelligence in mind, although in his treatment of encounter, he does consider the possibility of animal relationality but does not believe it rises to the level of human relationality. In characteristic fashion, Barth believes that human relationality is the image of the Creator, and therefore must be understood in light of the relationality within God in the form of the Trinity and the relation between divine and human in Christ. The four markers, with brief definitions, are as follows:
Open and Reciprocal Eye Contact: To engage in authentic encounter, first I must “see” the other; as Proudfoot explains, despite the reference to “eye contact,” this “seeing” does not need to literally come through the eyes. Likewise, seeing the other does not just mean acknowledging their existence, but recognizing the other as one like myself. It also entails openness to being recognized by the other.
Speaking to and Hearing Each Other: Authentic encounter also involves communicating to build mutual understanding. Although this may not involve “speaking” and “hearing” in a literal sense, it must, Barth believes, involve language in some form.
Mutual Giving and Receiving of Assistance: As we communicate, we share our needs with one another, and part of authentic encounter then is to be at the disposal of the other by helping to meet their needs. We must also be open to receiving help from the other.
Doing All This Gladly: Barth argues that for an encounter to be truly human, it must be entered into gladly, rather than reluctantly, which essentially means with love. According to Barth, it is this aspect that makes the encounter truly free.
Before exploring whether AI could engage in such an encounter, Proudfoot explains the three traits of “soul” that Barth believes are necessary for engaging in an encounter characterized by these four markers. A person must be self-aware, have a will governing their actions, and be enlivened by the Spirit with a capacity for entering into a relationship with God (the capax Dei).
Proudfoot believes that, in principle, AI could be capable of the first two of these traits, being self-aware and possessing a governing will. He denies, however, contra Barth, that the last trait—the capax Dei—is necessary for an authentic encounter, and therefore for a human-AI encounter. He concludes, then, that an artificial intelligence could conceivably engage with a human person in a relationship demonstrating the four markers described above. As I will discuss in greater detail below, however, he believes that the absence of the capax Dei would limit an AI’s ability to engage specifically in spiritual care.
Stoddart is perhaps even more skeptical that AI could meaningfully engage in pastoral care, in part because he assesses the capacities of AI more pessimistically. For example, Stoddart argues that pastoral care, or any kind of care for that matter, requires more than gathering information and analyzing data. It requires what he calls wisdom, the ability to develop relationships of reciprocity, trust, and even love, and to experience those relationships as intrinsic goods (or internal goods, in the MacIntyrean sense). Stoddart acknowledges that sophisticated forms of AI may be able to elicit in human beings the experience or sensation of being loved, but he doubts that they can in fact love. This contrasts with Proudfoot, who, as we have seen, does believe AI will eventually be capable of reciprocity. Like Stoddart, however, he believes that without the capax Dei, AI will be unable to exhibit the self-giving love (agape) necessary for authentic pastoral care.
Stoddart is also doubtful that AI is capable of empathy, another characteristic of pastoral care. Although he admits the possibility that AI may develop true emotions at some point in the future, in the near term AI does not appear capable of them. AI is capable, however, of skillfully mimicking human emotion, and human beings are prone to anthropomorphizing robots (like giving my robot vacuum a name…). Stoddart warns that this combination makes the use of AI in pastoral settings not only unhelpful but even potentially dangerous, particularly when working with individuals with limited cognitive capacities.
Finally, Stoddart notes that human action is characterized by contingency, which he defines as indeterminacy. AI, on the other hand, operates according to algorithms, acting in a determinate way, even when the algorithm is astoundingly complex. Drawing on the work of the neuroscientist Antonio Damasio, Stoddart argues, I think insightfully, that although human action draws upon the physical “substrates” of the body, which can be analyzed algorithmically, contingent decision-making cannot be reduced to these algorithms. The difficulty, however, is recognizing whether AI is capable of such contingency. As we have seen, Proudfoot believes so.
Stoddart links the contingency of human actions with “mortality,” by which he means not just our awareness of impending death, but also our consciousness of our creaturely finitude. Although I wish the argument were spelled out in greater detail, Stoddart argues that our awareness of creatureliness and death instills a sense of the limits of time, which is necessary for the experience of hope. Hope, in turn, is necessary for purposive action. Stoddart makes the startling claim that “Artificial intelligence is hopeless,” not in the sense that it despairs, but rather that, because it is determinate, it lacks purpose or even “intelligence” rightly understood.
Rather than addressing the problem of mortality, Proudfoot instead raises the question of the body in interpersonal encounters. He points out that, in addition to the three traits of soul necessary for authentic encounter, Barth had also described the three bodily qualities that are conditions for human relationality: the body must serve to identify a specific person, it must enable the person to interact with the rest of creation, and it must play a role in prompting action.
The human body locates a person in space and time. As Proudfoot notes, however, although existing AI is undoubtedly embedded in a physical infrastructure, it lacks a specific “spatio-material focal point.” Robotic bodies could provide this focal point, but as Proudfoot points out, they could also prove misleading since an AI could operate these robotic bodies remotely while being physically located elsewhere. Nevertheless, an AI could have an identifiable body that could likewise fulfill the second quality identified by Barth by enabling the AI to physically interact with other entities.
Like Stoddart, Proudfoot acknowledges that human decision-making arises out of bodily experiences of sensation and emotion, even if transcending those experiences. Proudfoot denies, however, that these biological drives are necessary for authentic intelligence or free decision-making. AI could, therefore, exhibit true intelligence and decision-making capacity even absent the experience of bodily sensations or body-based emotions; it will simply look different from human intelligence and decision-making. These differences in the form of intelligence and decision-making will certainly make human-AI encounters different from human-human encounters. Like Stoddart, Proudfoot recognizes the risk that the mimicry of emotions by AI poses to human-AI encounters, but he argues that the problem is not inherent to AI itself, but rather arises from the misguided desire for AI to be human-like. He writes: “Rather than expecting computers to conform to the human model of intelligence we must accept their alterity.”
Proudfoot’s discussion of the bodily nature of AI brings to the fore that a self-conscious AI might experience needs of its own. For example, it would need for the physical structure in which it is “embodied” to be kept safe and maintained. Proudfoot also points out that an AI may have “noetic” needs like curiosity or a desire for fellowship. Therefore, an AI might be aware of its finitude (or even its “mortality”) after all. This is pertinent because, according to Barth and Proudfoot, mutual solicitude for the needs of the other is one of the hallmarks of an authentic encounter. This suggests we may need to move beyond thinking of AI simply as a tool for our use. For example, the philosopher Luciano Floridi has considered what it would mean to treat artificial agents as bearers of moral claims, even if they do not rise to the level of intelligence envisioned by Proudfoot (analogous to the moral claims of animals, for example). Proudfoot argues that what we can provide for an AI and how it can assist us ought to be determined by the distinct needs and capabilities of each, again respecting our mutual alterity.
Could human relationships with AI include a spiritual or pastoral dimension? As we have seen, Stoddart does not think so because he does not believe AI is truly capable of engaging in any kind of reciprocal relationships of care. Although Proudfoot argues that AI could be capable of entering into an authentic encounter with human beings without the capax Dei, the capacity to enter into a relationship with God, he is agnostic about whether AI could potentially possess that capacity. He poses the theological question succinctly:
I have . . . adopted the assumption arguendo that [conscious machines] will lack capax Dei. That is not necessarily the case. Could capax Dei emerge from (or with) consciousness, or does it require a special act of God? Further, would such an entity itself need, and could it receive, salvation?
For those with training in 20th-century Catholic theology, this might evoke the debate over grace and pure nature, which similarly hinged on the possibility of imagining a being possessing reason and free will but created without the desire for communion with God. Although lacking a human body and the distinctive form of intelligence that arises from its integration with the mind, and therefore not sharing a human nature, could an artificial intelligence possess the openness to the Infinite that remains unfulfilled without the vision of God?
Like Stoddart, I am not yet convinced that AI is capable of wisdom and contingent decision-making, and so the theological question is moot. That being said, I think Proudfoot raises points that remain important and provocative even independently of the question of AI’s capacity to enter into reciprocal encounters, such as our obligations to artificial agents and the recognition that intelligence may look different in different types of entities. Nevertheless, the theological question of the spiritual capacities of AI and its ability to meet the spiritual needs of human beings is worth asking, and both Stoddart and Proudfoot make helpful contributions to answering it.
Share your own thoughts on AI and pastoral care, or AI and the desire for God, in the comments! What are some practical ways AI could be used in pastoral care?
Of Interest…
Staying on the topic of artificial intelligence, Claire Giangravé, writing for Religion News Service, describes a new Catholic AI program called Magisterium that promises to provide users with answers to their questions about Catholic doctrine and practice. Magisterium has been “trained” using official documents of the Church. I want to learn more about Magisterium before writing about it, but based on the description given by its boosters, it sounds like it is based on a faulty theological understanding of magisterial authority and is promising outcomes it can’t deliver. When the AI is fed hundreds or thousands of official documents, it treats their contents as data points in an algorithm, without regard to context or their systematic relationship to each other. I’m skeptical that it can give authentic and accurate answers to complex theological questions. Likewise, one advocate cited in the article claims that the program could uncover historical misunderstandings of culture and context that have led to divisions among Christians, leading to better ecumenical relations. But this sounds precisely like the sort of thing AI is ill-equipped to do. How could the AI discern the context and connotation of historical texts and identify common meanings behind disparate texts? That would require not just historical, cultural, and theological knowledge not contained in the texts themselves, but also the intellectual capacity to look beyond the horizon of the texts. I remain open to being surprised, but I am not optimistic.
According to Cindy Wooden writing for the Catholic News Service, Pope Francis has mentioned that he is working on a “second part” to the 2015 encyclical Laudato Si’, on care for creation, which will update it in light of current issues. A Vatican spokesperson later clarified that this includes recent extreme weather events and other natural catastrophes and suggested that the new document will be an encyclical. The document is due to be released on October 4, the feast of St. Francis of Assisi. I also think it is worth pointing out that Laudato Si’ was far more than an encyclical on climate change, or even on protecting the environment. It offered a profound theology of creation and our place in it. It will be interesting to see what Pope Francis wants to say in the “second part.”
The Catholic News Agency reports on some recent remarks by Pope Francis on the importance of the upcoming Synod on Synodality. I think the headline ended up being misleading: “Pope Francis: Synod on Synodality ‘truly important’ despite being ‘of little interest to the general public.’” Pope Francis’s point, however, was not that the Synod is of little interest to the public, but rather that it “may seem something abstruse, self-referential, excessively technical, and of little interest to the general public” (emphasis added), as he is quoted as saying. Maybe I am nitpicking, but as the quotation suggests, I think his point was about a lack of understanding of what “synodality” is, and not about enthusiasm regarding the synodal process, which of course varies from place to place.
Coming Soon…
I am still working on the second post on the continent of Africa as part of my Synod on Synodality World Tour. The first part focused on the key themes in the African continental document. The second will look at many of the Synod participants coming from Africa. After that, I will start working on the articles on Europe, which will conclude the series.