The Vatican’s new document on artificial intelligence (AI), titled Antiqua et Nova, offers reflections on the nature of “intelligence,” in both its human and machine varieties, considers how AI will impact human relationships, and offers judgments on a host of ethical challenges posed by AI. Subtitled a “Note on the Relationship Between Artificial Intelligence and Human Intelligence,” the document was released to the public on Tuesday by the Dicastery for the Doctrine of the Faith (DDF), headed by Cardinal Víctor Manuel Fernández. As I noted last week, Fernández had revealed the imminent publication of the document in a recent interview with the National Catholic Register’s Edward Pentin, but otherwise there had been little notice of work on the document. That being said, the Vatican has been quite focused on AI and its implications during the pontificate of Pope Francis, and Antiqua et Nova is likely the culmination of those efforts.
Nearly two years ago, in an article on theology and artificial intelligence, I argued that the crucial philosophical and theological questions regarding AI could be distinguished into ontological questions and ethical questions. I wrote:
What I mean by ontological questions are those related to what it means to be an intelligent being, which include the previously-noted question of how to define “intelligence,” and the question of what it means to have a relationship with an intelligent machine. Ethical questions are those related to how the design, training, and outputs of learning machines impact human beings, society, and the natural environment.
In Antiqua et Nova, the DDF follows a similar approach. The first half of the document considers questions like the nature of intelligence, what it means for a machine to be “intelligent,” and the distinguishing characteristics of human intelligence like embodiment, relationality, and an innate desire for truth. The second half examines a variety of ethical issues raised by, or complicated by, AI such as social inequality, privacy, the rights of workers, and the protection of the natural environment.
In this article, I want to focus on the first half of the document while touching on issues in the second half when appropriate. That’s partly because the second half draws heavily on prior statements by Pope Francis that I’ve previously discussed here at Window Light, and partly because many of the ethical issues discussed in the second half of Antiqua et Nova would each require an entire article of their own to discuss adequately!
In good scholastic fashion, Antiqua et Nova begins by defining terms, starting with “artificial intelligence.” It introduces the famous definition developed at an early workshop on the topic led by the computer scientist John McCarthy at Dartmouth College in 1956, when the idea of intelligent computers first seemed like a possibility, even if a future one: artificial intelligence is “making a machine behave in ways that would be called intelligent if a human were so behaving” (cited in #7). It’s worth pointing out, however, that this definition has also been contested, and McCarthy himself came to conclude that artificial intelligence should not be measured in terms of imitation of human intelligence.
While holding up humankind as the exemplar of what intelligence means, the DDF concludes that artificial intelligence is fundamentally different from human intelligence. While AI can perform certain functional tasks with acuity similar to, if not surpassing, that of humans, it lacks the breadth and depth of human intelligence:
In the case of humans, intelligence is a faculty that pertains to the person in his or her entirety, whereas in the context of AI, “intelligence” is understood functionally, often with the presumption that the activities characteristic of the human mind can be broken down into digitized steps that machines can replicate. (#10)
This focus on the performance of functional tasks by machines, even sophisticated ones, is commonly referred to as “narrow AI.” The document explains that “narrow AI” can be contrasted with “Artificial General Intelligence (AGI),” or “a single system capable of operating across all cognitive domains and performing any task within the scope of human intelligence” (#9), currently a purely hypothetical possibility. Were AGI to be developed, however, the DDF contends, it would nevertheless embody a reductive type of intelligence: “[E]ven as AI processes and simulates certain expressions of intelligence, it remains fundamentally confined to a logical-mathematical framework, which imposes inherent limitations” (#31). This contrasts with the richness of human intelligence, even when human intelligence pales in comparison to the speed and scope of AI’s computational capacity.
Looking at the distinctive qualities of human intelligence, Antiqua et Nova starts by pointing out that the Christian tradition, drawing in part on the Western classical inheritance, has always insisted that humans are rational beings. This does not simply mean that humans are capable of performing certain mental tasks, however; rather, rationality is a quality of being that “shapes and permeates all aspects of human activity,” including “‘knowing and understanding, as well as . . . willing, loving, choosing, and desiring’” (#15, citing the DDF’s Dignitas Infinita).
The document goes on to state that human intelligence is driven by a desire for truth. Quoting from Pope John Paul II’s 1998 encyclical Fides et Ratio on the relationship between faith and reason, it continues, “[T]he desire for truth is part of human nature itself. It is an innate property of human reason to ask why things are as they are” (#21). This desire for the truth draws the human person to go beyond the physical, created world in search of God, in whom “all truths attain their ultimate and original meaning” (#23).
Although the document doesn’t make this entirely clear, the implication here is that, while AI may be able to precisely analyze data and generate output, it has no sense of the truth of things, and it certainly does not demonstrate true curiosity. This has been an important lesson learned from the widespread adoption of Large Language Models (LLMs) like ChatGPT and Copilot: although an LLM may provide a competent answer to a prompt, it does so not because it “knows” the answer or believes its response is true, but because it is imitating the myriad examples of human language use in its training data. LLMs, like other forms of AI, lack the capacity to seek and recognize truth.
Antiqua et Nova also suggests that human knowing has a certain depth absent from AI, given that human knowing takes place against an infinite horizon:
Human intelligence is not primarily about completing functional tasks but about understanding and actively engaging with reality in all its dimensions; it is also capable of surprising insights. Since AI lacks the richness of corporeality, relationality, and the openness of the human heart to truth and goodness, its capacities—though seemingly limitless—are incomparable with the human ability to grasp reality. So much can be learned from an illness, an embrace of reconciliation, and even a simple sunset; indeed, many experiences we have as humans open new horizons and offer the possibility of attaining new wisdom. No device, working solely with data, can measure up to these and countless other experiences present in our lives. (#33)
In one of the most profound passages in the document, it explains that human intelligence unfolds through engaging with reality in the ongoing journey of one’s life:
This engagement with reality unfolds in various ways, as each person, in his or her multifaceted individuality, seeks to understand the world, relate to others, solve problems, express creativity, and pursue integral well-being through the harmonious interplay of the various dimensions of the person’s intelligence. (#27)
This growth in understanding is not “the mere acquisition of facts,” but a pursuit of the True and the Good (#29).

Antiqua et Nova likewise teaches that human intelligence is relational. At one level, this means that “[w]e learn with others, and we learn through others” (#18): human intelligence is expressed through “dialogue, collaboration, and solidarity” (#18). Perhaps more profoundly, to be human means to desire to know others and to be known by others.
Because of the limitations of AI, the DDF concludes that it is not possible for humans to develop an authentic relationship with AI, at least one comparable to a human relationship: “Authentic human relationships require the richness of being with others in their pain, their pleas, and their joy” (#58). The document adds that, even though AI may be able to simulate aspects of empathy, it remains incapable of demonstrating true empathy, a necessary component of authentic relationships (#61). I further explored this notion of an authentic relationship, and whether it is possible to participate in one with AI, in this article, where I drew on the work of the theologians Eric Stoddart and Andrew Proudfoot.
The document makes the interesting point that in many cases AI programs, particularly chatbots, can simulate human-like responses, making it sometimes difficult to know if one is talking to a human being or a machine (#59). It goes on to point out that the producers of AI software often take advantage of this difficulty, anthropomorphizing their products as a way of generating interest.
The DDF also warns against the opposite phenomenon, of reducing human relationships to interactions more appropriate for machines. They give a very relevant example: “Such habits could lead young people to see teachers as mere dispensers of information rather than as mentors who guide and nurture their intellectual and moral growth” (#60). The document further expands on this insight in a later section on the challenges of AI for education, a section that itself may be worthy of a later article in Window Light.
Although recognizing that AI can serve as a valuable resource by providing students with support or feedback on their work, education is fundamentally relational, and AI cannot replace the teacher-student relationship:
At the center of this work of forming the whole human person is the indispensable relationship between teacher and student. Teachers do more than convey knowledge; they model essential human qualities and inspire the joy of discovery. Their presence motivates students both through the content they teach and the care they demonstrate for their students. This bond fosters trust, mutual understanding, and the capacity to address each person’s unique dignity and potential. On the part of the student, this can generate a genuine desire to grow. The physical presence of a teacher creates a relational dynamic that AI cannot replicate, one that deepens engagement and nurtures the student’s integral development. (#79)
This insight could be applied even more broadly, for example, to online education.
Echoing the experience of many frustrated professors, Antiqua et Nova also notes that, with the spread of AI, students are tempted to use AI to “merely provide answers instead of prompting students to arrive at answers themselves or write text for themselves” (#82). Some of the leading voices on the use of AI in higher education have suggested that educators should steer students toward more responsible uses of AI. I have thought to myself that one of the reasons why the (mis)use of AI among college students is so widespread and pernicious, and why this call for the responsible use of AI is so challenging, is that traditional-age college students are reaching precisely the stage of intellectual development in which we grow from thinking of learning as “finding the answer,” or performing an assigned task correctly, to understanding that learning is a process of critical thinking and discovery. College students are in the process of learning that teachers are not merely “dispensers of information” but rather “mentors who guide and nurture intellectual and moral growth,” to use the words of the DDF, and some never completely reach that stage of maturity. AI, and particularly LLMs, provide students with a convenient excuse to remain comfortably in the earlier stage of development and to avoid the hard work of intellectual growth. I think Antiqua et Nova supports this diagnosis!
The document provides another context in which AI cannot replace human relationships, even if it may be of great benefit in other ways: healthcare. It states:
[I]f AI is used not to enhance but to replace the relationship between patients and healthcare providers—leaving patients to interact with a machine rather than a human being—it would reduce a crucially important human relational structure to a centralized, impersonal, and unequal framework. (#73)
The DDF further concludes that decisions regarding patient well-being should never be made by AI, but always by a human person (#74).
The third way in which human intelligence differs from AI is that human beings are embodied. The document states: “[T]he intellectual faculties of the human person are an integral part of an anthropology that recognizes that the human person is a ‘unity of body and soul’” (#17, citing the Second Vatican Council’s Gaudium et Spes). It goes on to explain that our embodiment is linked to the way human intelligence grows throughout our lives, which I mentioned earlier:
Human intelligence . . . develops organically throughout the person’s physical and psychological growth, shaped by a myriad of lived experiences in the flesh. Although advanced AI systems can “learn” through processes such as machine learning, this sort of training is fundamentally different from the developmental growth of human intelligence, which is shaped by embodied experiences, including sensory input, emotional responses, social interactions, and the unique context of each moment. These elements shape and form individuals within their personal history. In contrast, AI, lacking a physical body, relies on computational reasoning and learning based on vast datasets that include recorded human experiences and knowledge. (#31)
Antiqua et Nova briefly mentions that some have imagined that AI could provide a means of escaping or transcending the limitations of the human body (#9), but here the DDF makes the case that in fact human embodiment gives us an advantage over machines.
All that being said, the DDF avoids the fallacy of thinking that AI exists in some ethereal, immaterial reality, and indeed warns about “the way this technology is presented in the popular imagination, where words such as ‘the cloud’ can give the impression that data is stored and processed in an intangible realm, detached from the physical world” (#96). This concern is raised in the midst of a discussion of the environmental harms caused by the infrastructure needed for AI to function, including the consumption of energy and the use of water (#96).
I haven’t touched on every theme raised in Antiqua et Nova—for example, it has some interesting reflections on how the stewardship of the created world, including the development of technology, is one of the tasks of human intelligence, and therefore we ultimately ought to think of AI as a product of human intelligence (## 24-25, 35). Likewise, I skipped over several of the ethical issues raised in the document.
I want to close by noting elements of Antiqua et Nova’s intellectual lineage. As I’ve already mentioned, it draws heavily on Pope Francis’s earlier statements on AI, and several times appeals to the Second Vatican Council’s Gaudium et Spes and Pope John Paul II’s Fides et Ratio in developing its account of human intelligence. Its understanding of the human intellect also relies on the thought of St. Thomas Aquinas, who is likewise cited. Indeed, at the very end, the document specifically mentions that it was officially published on the feast of St. Thomas Aquinas, January 28.
Antiqua et Nova also evokes other earlier teachings. For example, its claim that AI exhibits a limited form of rationality compared to the breadth and depth of what human intelligence is capable of reminded me of Pope Benedict’s reflections (for example, in his controversial Regensburg Address) on how reason, absent the broadening horizon of faith, is not only incomplete, but even impoverished. Antiqua et Nova’s treatment of the depth of human intelligence also echoes Pope Francis’s recent remarks on the discipline of theology (which I commented on here), in which he stated:
The desire is this: that theology help to rethink how to think. Our way of thinking, as we know, also shapes our feelings, our will and our decisions. A wide heart is accompanied by a wide-ranging imagination and thinking, whereas a shriveled, closed and mediocre way of thinking is hardly capable of generating creativity and courage.
The human intellect is capable of this wideness because we have heart (an allusion to Francis’s encyclical Dilexit Nos, on the Sacred Heart of Jesus). One might add that, when human beings adopt the limited type of intelligence exhibited by machines as their own—a temptation that, as we have seen, Antiqua et Nova recognizes as real—our imagination becomes shriveled and closed. Indeed, Antiqua et Nova also refers to Francis’s encyclical Laudato Si’ and its warning against the “technocratic paradigm,” which is precisely this reductive way of thinking applied to our relationship with the natural world and with one another. It cautions that the widespread use of AI can foster the technocratic paradigm by tempting us to think of every problem, no matter how complex, as solvable by technology alone (#54).
Antiqua et Nova’s account of human intelligence also evokes great Catholic thinkers from the recent past like Pierre Rousselot, S.J., Maurice Blondel, Karl Rahner, S.J., and Bernard Lonergan, S.J., although none are explicitly mentioned (the late psychologist and economist Daniel Kahneman and his popular book Thinking, Fast and Slow are cited in a footnote, however!). The document also echoes more recent theologians who have explored the quandaries posed by artificial intelligence for many years, like Noreen Herzfeld, Ilia Delio, O.S.F., and the various scholars involved with the AI Research Group co-sponsored by the Vatican and Santa Clara University’s Markkula Center for Applied Ethics.
If I could change one thing about Antiqua et Nova, it would be to add a section on the divine intellect, or God’s intelligence. After all, as Thomas Aquinas, and other great theologians of the Western tradition like Augustine, Bonaventure, and John Duns Scotus would all acknowledge, human intelligence is itself a participation in and reflection of the divine intelligence. As the document itself notes, citing Gaudium et Spes, the human person “shares in the light of the divine mind” (#17). A reflection on the divine mind, even if brief, would have been helpful. Even so, the document is deeply theological and very relevant for our contemporary context.
Of Interest…
The Christian ethicist
has also written a commentary on Antiqua et Nova, praising the intellectual rigor the Vatican brought to the document and noting the significance of an influential religious institution like the Catholic Church speaking on the issue of AI. He also devotes more attention to what the document has to say about AI and moral reasoning than I was able to cover here, so for these and other reasons, it is worth taking a look.

Over the past few weeks, I’ve written several times on how the US Catholic bishops should respond to Trump administration immigration policies, including this week’s round-up of responses by the bishops to Trump’s executive orders. At Commonweal, the philosopher Terence Sweeney makes a similar plea for the bishops to speak up, drawing heavily on the Church’s tradition of teachings on immigration.
Speaking of finding one’s voice, on Tuesday of this week, Cardinal Timothy Dolan, the Archbishop of New York, criticized Vice President J.D. Vance’s recent remarks accusing the Catholic Church of being motivated by financial gain in its services to refugees. Dolan said of Vance’s comments, “That’s just scurrilous. It’s very nasty, and it’s not true.” Dolan’s response is noteworthy because he not only led the invocation prayer at President Trump’s inauguration on January 20, but also seems to take a rosy view of the president’s character, claiming that Trump “takes his Christian faith seriously” and suggesting that “something mystical” happened to Trump when he was nearly assassinated last summer. Hopefully Dolan’s more recent remarks suggest that he is waking up to the fact that the Trump administration will not be a faithful friend to the Catholic Church, although he added that he thought Vance’s comments were uncharacteristic, and that he’s “a guy who has struck me as a gentleman and a thoughtful man, . . . from whom I’m still expecting great things.”