In the latest episode of Ethics on Call, hosts Dan Daly, Executive Director of the Center for Theology and Ethics in Catholic Health, and Tom Bushlack, Senior Director of the Center, discuss Antiqua et nova, a January 2025 doctrinal note on artificial intelligence issued by the Dicastery for the Doctrine of the Faith and the Dicastery for Culture and Education. Dan and Tom discuss the document's purpose and main arguments and connect it to Catholic health.
Tom Bushlack, Ph.D. (00:09):
Hello and welcome to Ethics On Call, the official podcast of the Center for Theology and Ethics in Catholic Health. I'm Tom Bushlack.
Dan Daly, Ph.D., S.T.L. (00:18):
And I'm Dan Daly.
Tom Bushlack, Ph.D. (00:19):
And for those of you who might be watching this live on YouTube, you'll notice a bit of a change. We're here in the studio together in St. Louis, recording in person, and we're going to keep working on the tech to make this as engaging and exciting for all of you at home as possible. Today we're going to talk about Antiqua et Nova, the Latin title of a document called A Note on the Relationship Between Artificial Intelligence and Human Intelligence, published by the Dicastery for the Doctrine of the Faith and the Dicastery for Culture and Education on January 28th, 2025. So we're really glad that you're here joining us today, listening or watching at home, and we're going to hear more today from our executive director, Dan Daly, as we dive deeper into the note Antiqua et Nova.
(01:09):
We'll also draw on a brief commentary, Dan, that you wrote upon its publication, and we will put links to both the original document and your commentary in the show notes for folks to read at home. And then we're really going to use that as a starting point for a broader theological and ethical conversation about the implications of AI in Catholic health care, and even in society and culture. So Dan, to get us started, why don't you tell us a little bit more about the document and the context around its publication, and then we can jump in for a deeper analysis from there.
Dan Daly, Ph.D., S.T.L. (01:45):
So thanks, Tom. As many of you may know, Pope Francis has been a leader on this issue for years, and the document was really written to systematize a lot of his thinking. If you look at the document, there are 215 footnotes, about half of which cite a document that Francis wrote. So it's deeply reflective of Francis's approach to AI and to technology, and it brings that together in a higher-level document to educate the Catholic laity, bishops, priests, but also the public, on the issue of AI. And the primary issue the document intends to address is just what you read in the title: the relationship of artificial intelligence and human intelligence. And really, as I read the document, it's trying to correct a misunderstanding, and the misunderstanding is of what human intelligence is.
(02:44):
So there's an analogy being drawn between human intelligence on the one hand and artificial intelligence on the other. That analogy ultimately fails because it rests on a misunderstanding of human intelligence. It treats human intelligence as just the kind of behaviors that human beings externalize when they do something rational or intelligent. Human intelligence, the document is going to argue, is so much richer than what an AI can do. You can ask an AI a question and it can spit back a response based on a large language model; a human being can do that, and an AI can do that. But is that really what we mean by intelligence?
(03:34):
And that is really the crux of the piece: to correct a misunderstanding. The first misunderstanding is of what human intelligence is, and then it really tries to get us to a better understanding of what artificial intelligence is. But as we find in the document, Francis, but also the writers of the document, don't really like the term artificial intelligence, because again, the analogy seems to fail. It really is not anything like what we mean by human intelligence. So we need to get a better understanding of both human intelligence and artificial intelligence. And that's really the core of the piece.
Tom Bushlack, Ph.D. (04:18):
Yeah. I think parsing out that distinction, because you used the term that artificial intelligence feels and seems like human intelligence, and parsing that difference, I think, is probably going to be really core to what we talk about and what the document is about. I do want to clarify one thing at the very beginning for folks listening: though the footnotes cite heavily from Pope Francis, this was not written by Pope Francis. I don't know if you want to clarify that, but I did want to make sure people understand that.
Dan Daly, Ph.D., S.T.L. (04:49):
Yeah, so the document was signed off on by Francis and approved in an audience with Francis. So he endorses the document, but it was written by these two dicasteries, which are essentially Vatican offices, as you know. But really, they are drawing so deeply on Francis's thought that you can consider this to be somewhat of a summarization of his thought by others. There's certainly some novel work in here, but Francis's thought weighs incredibly heavily on the document.
Tom Bushlack, Ph.D. (05:26):
And I definitely want to get back to that core point of human intelligence and artificial intelligence, or augmented intelligence, which I know is the preferred term not only in this document but even among some people in Catholic health care. But to set us up a bit more, because that gets heavy, that's more philosophical and theological, and we want to go there: can you give us just an overview of the structure of the document? What is it basing its approach or argument on? And I think we can start to unpack it a little more from there.
Dan Daly, Ph.D., S.T.L. (05:59):
So there are a number of different sections, but really the first thing the document does, as I said earlier, is go through that analogy of human intelligence and artificial intelligence. You could say that's a section more on theological anthropology: what does it mean to be a person in relation to God, a person who's been created by God? Then it gets into some special areas, and one of those areas, importantly for our audience, is health care. It looks at war, it looks at education, but it looks at health care among other areas. So if you could break it down into two parts, it's that basic, fundamental theological anthropology and then how that gets applied. So certainly there's a lot of ethics in the document, because, as you know, theological anthropology is the grounding point, the touch point, for Catholic ethics. We cannot figure out what persons should do or who they should become if we don't know who they are, how they've been created, what their purpose is, what it means to be human. And so those two parts work together. That's a rough sketch of the overall outline.
Tom Bushlack, Ph.D. (07:17):
Yeah, no, I think that's helpful just to lay that out for folks who haven't read it or who have it on their reading lists, on their nightstand. And if you don't, go ahead and do that now. So let's get into the meat and potatoes then, because the core of this, as you've outlined it and as the document outlines it, is what do we mean by intelligence, specifically human intelligence? And then, to the extent that we can even say there is such a thing as artificial intelligence, what's the distinction? So maybe just start with your take on what the document is saying about that, and then of course there are lots of other conversations happening around this that we can bring into that dialogue.
Dan Daly, Ph.D., S.T.L. (08:00):
So the document gets into theology here. We're both Thomists, so you'll appreciate this. They turn to Aquinas: what is the intellect, really? What is reason, really? And Aquinas says, look, there are two different faculties under this broad umbrella of the intellect, the intellectus and the ratio. The intellectus is the aha moment, where you receive new insights almost as a gift. You may even be thinking about something else, but it's when you're on that walk with your dog and the light goes on. It's not something that's laborious. That's the intellectus, the mode of new insight and discovery. The ratio is where we get the word ratiocination, which really gives you the sense that you're working; it's this kind of discursive, over-and-over thinking about something. It's laborious. And these work together, and they work together, importantly, Tom, in a whole person.
(09:07):
So for Aquinas, for the tradition, the intellect and the reason are not disembodied; they're not abstracted aspects of our being. Rather, the intellectus and the ratio, the two parts of our intellect, are embodied. We receive information through the world. We're very tactile; we're sensing creatures. We are our bodies. We don't have bodies, we are our bodies. We're as much our bodies as we are our souls, so much so that our bodies are resurrected. I mean, that's a wild belief of the faith, but that's how much you are your body, how much I am my body. We will always be embodied, and we learn through the body. We also learn through the passions, through the emotions, through our feelings. And the document really wants to emphasize that as well: we're embodied, we learn through the passions, through our feelings, and also through our relationships.
(10:14):
And maybe most importantly, we learn through our relationships, perhaps most of all our relationship with God, by receiving grace, by cooperating with God's grace, but also through others: in our relationships with teachers, with parents, with children, with friends, with people we meet on the street. We learn through our relationships, and all of this holistic understanding of how human beings come to know informs the perspective on what the human intellect is. It is not simply that we can do math in an abstract way or that we can make statements; rather, it involves the whole of our person. And it goes further. Artificial intelligence is really good at being given a problem and helping to solve that problem. But let me back up for one second. The document does not say that artificial intelligence is bad. It does not say that it should be rejected. It follows along Francis's line of thinking, where he said, look, it's a hopeful and fearsome tool. We have reason to hope that it can aid human wellbeing; we also have reason to fear what it's going to do or what it could become. So AI is not to be rejected, but rather to be better understood. And the first understanding is that it's not human intellect.
(11:46):
It cannot contemplate beauty. It can't contemplate those transcendent values of goodness and beauty in the way that we can, or the meaning of life, the purpose of life. It does not have a soul; it does not have a spirit. All of these parts of our humanness lead to and play upon our intellect in a way that machines can't. It's impossible for a machine to do these things.
Tom Bushlack, Ph.D. (12:17):
It's interesting. There's so much in there that we have to decide what we want to unpack, and it was beautifully said, talking about the intellect and the way in which humans come to know. You're speaking, and the document is speaking, out of a theological anthropology. But I read a lot of hardcore materialist scientists and listen to podcasts, and I think even from that perspective, they could have that same appreciation of how insight into physics happens, or the understanding of the material world. We do the ratio, the critical thinking and the work, and then we step back and have these insights, these aha moments, and they're deeply embodied and deeply relational. And as you were talking, I hadn't thought about this before, I think you're right that two of the dimensions of human intellect that are absolutely essential are the embodiment component and the relational component: relational in terms of the created world, but also in terms of human relationships and human nature.
(13:29):
And I was thinking about how in quantum physics, and this is not in the document at all, this is me riffing a little bit, we've basically proven that the person doing the observing, even in science, affects what is observed and the way we understand reality. So there's something about knowing itself that is relational, even outside of a theological framework. But then of course we bring this theological framework in, and you kind of went there at the end. It's almost like, from the Christian, Catholic, biblical worldview, those embodied, relational dimensions are there almost to serve a further purpose in that contemplative dimension of knowing. So can you say a little bit about what is so distinctive about human intellect that leads to that? And then I think we can start to dive into what's different about AI.
Dan Daly, Ph.D., S.T.L. (14:28):
Yeah, so certainly, you talk about the contemplative aspect of knowing. This touches upon something we've talked about before as well: discernment. We come to know things about ourselves or God or others through processes of discernment, which are very embodied, very contemplative and spiritual. You think about Ignatius's processes of discernment. You can think about Francis, who gave a catechesis on discernment a number of years ago. AI doesn't discern. It doesn't take time with ideas. It doesn't create space for God's grace to work upon the person, her intellect, his intellect, her spirit, his spirit. I think the overall point this piece is getting at is that we need to get away from a reductive account of the human person, where we reduce the human person to certain behaviors. And this is very modernist, this kind of modernist approach where there's nothing behind the person.
(15:48):
You are your behaviors, and maybe even your behaviors are determined by your biology, by your genetics, by your social structures. It's completely deterministic. There really is nothing behind your eyes; what you say and do is your reality. And this account is pushing back on that and saying no, to be a human being is to be an embodied soul in relation to God and others. That's really the best definition of what it means to be human: we're embodied souls in relation to God and others. And that, I think, is the piece that's missing when people talk about AI being so human-like, look at all the things it can do, it can be as empathetic or more empathetic than a person, or at least appear to be, right? Appear to be. That's a misunderstanding of what it means to be an empathetic person.
(16:43):
That's a misunderstanding of what an empathetic action is and what the virtue of empathy is. That virtue comes from a place that is deeply relational and emotional; the wellspring is deeply emotional and relational. There is no empathy without feeling with another, feeling their pain, experiencing it for yourself, and then, hopefully, acting in a way that alleviates the suffering.
Tom Bushlack, Ph.D. (17:14):
Which is that core virtue of solidarity that we talk about a lot in our work and in Catholic social teaching. That's right. I think if you wanted a soundbite as we're talking about this, it's the distinction between wisdom and algorithm, right? A human, an embodied, relational being, can work towards, acquire, and even receive wisdom through grace; an AI has an algorithm. I know you've done more research in this area than I have, but one of the ways AI is talked about is as a mirror. AI is not self-generating. It can do novel things that almost appear self-generating, but ultimately the AI is only as good as the large language models that we feed it. And so in that sense, it mirrors us back to ourselves. Which is why, as a friend of mine actually did, we had this conversation, he was feeling kind of sad one night and it was late, and he started asking ChatGPT basically therapy-type questions. And he read the answers to me, and I was like, I actually feel kind of good after reading that.
(18:31):
But it was only able to do that because it had been fed certain language models; that's mirrored empathy. So I think you did an excellent job of articulating what human intelligence is from our theological anthropology. How would you describe, and there are two different angles here, what AI is and what it is not? Because I think even for people just interacting day to day with ChatGPT or something like that, or responding to a chatbot on a website, the line gets blurry, and that's part of the challenge of AI. So how can we parse that difference?
Dan Daly, Ph.D., S.T.L. (19:14):
Yeah, it's a great question. And in fact, the document talks about this. Antiqua et Nova basically argues that AI reflects human intelligence; it does not possess intelligence.
(19:27):
It reflects agency, but it does not itself possess agency. And if you look at the language that people use around this, again, the phrasing of artificial intelligence, or talk of AI agents, is only loosely analogous. The document basically suggests the analogy fails. As you know, an analogy works when it illumines more than it obscures. That's what analogies do; they're never perfect, but do they illumine more than they obscure? And Antiqua et Nova is essentially arguing that the analogies we are using to talk about AI obscure more than they illumine. It's not like us. It is a machine. It's a program that we have created. It is a product, not a person. You mentioned the AI mirror understanding. There's a great book that I recommend people read if they have a chance. It's entitled The AI Mirror, and it's by Shannon Vallor. The argument is essentially that AI mirrors back to us who we are. It is no better than us. It has all of our virtues and vices; all of our foibles, all of our biases and prejudices are baked into AI. And I could give you a laundry list of examples, as many of the listeners will know, of how AI spits back racist or sexist outputs. To this day it's still doing this, and it's in the news.
Tom Bushlack, Ph.D. (21:09):
I thought this was going to save us from our flaws. Are you crushing my hopes here?
Dan Daly, Ph.D., S.T.L. (21:17):
Well, Tom...
Tom Bushlack, Ph.D. (21:18):
I was being facetious, but it's actually a real point.
Dan Daly, Ph.D., S.T.L. (21:21):
It is. I mean, if you read some of the evangelists, and I use that term intentionally, I think they are evangelists of this technology. They think it's our salvation. Take Sam Altman, who's the CEO of OpenAI, a very powerful company, one of the leaders, if not the leader, in this space. Altman wrote a piece in June of 2025 in which he basically says, look, there are going to be some problems with this, but let's understand that all human progress essentially is scientific, and that ultimately what AI is going to do for us is give us, and this is a direct quote, "better stuff." He sees the threats of AI, but the problems and challenges that AI poses are sidelined, and basically what he's doing is amplifying all of the possibility of AI. He sees it as somewhat of a savior for humanity. But as Vallor teaches through her book, The AI Mirror, it is no better than us. It is not going to be better than us. It's reflecting to us who and what we are. The reason is this: basically, all AI is is a program that learns based on what we feed it.
(22:44):
Now what do we feed it? We feed it our products, we feed it our classism, our sexism, our racism.
Tom Bushlack, Ph.D. (22:53):
It's like our shadow in the Jungian archetypes of the human psyche.
Dan Daly, Ph.D., S.T.L. (22:59):
That's right. Yeah, yeah, yeah. There's a Platonic element here.
Tom Bushlack, Ph.D. (23:02):
Yeah. But I want to also say we all have our shadow, we have our cultural shadows, but we also have our incredible virtues. That's right. So it's both. But the danger, and correct me if you would say this differently, what I'm hearing you say is: if we only focus on the virtue and we ignore the fact that we're feeding it all of our humanity, then we're going to maybe be even more blind to our own areas of human or moral weakness. And this could obscure that even further by giving us a false sense of, hey, everything's great, we've got it figured out.
Dan Daly, Ph.D., S.T.L. (23:44):
That's exactly it. The thought is that if we use AI well, we can get past all of these social pathologies. And the fact of the matter is that it can entrench them, but in an insidious way, in a way that's hard to see. It's difficult to ask an AI to show how a decision was made; it's not transparent. Ultimately, a person can lie, but we can continue to question them; we can question people around them. It's very difficult, as these AI systems become more and more complex, to really get behind their quote-unquote decision making. And so again, you make a good point: AI has promise, and the document notes that. It says the promise in health care is in diagnostics and in the possibility of expanding access to diagnostic technology. That could be a revolutionary value-add that AI gives the world. So we should be excited about that.
(24:51):
But the way technology goes is that we need to be critical as well, and there isn't enough criticism, especially of those who are making the decisions. I don't know Sam Altman, I don't know his character, but I do know that he's making some of these decisions, and it's a little worrisome when he downplays the challenges or the threats and amplifies all the benefits. That is worrisome, and I think we should be questioning him and others when we look at AI in this way.
Tom Bushlack, Ph.D. (25:26):
Yeah, I want to unpack that piece. I think we've highlighted the shadow and the challenges really well, and it's kind of easy to do; it's just very Orwellian, it's right in your face. But if I just look at number 72 of the document, that's the brief part that talks about AI and health care.
(25:49):
The document lists, like you mentioned, assisting in diagnostics. And I was listening to a podcast with Eric, a cardiologist who's written a lot on not just AI but a lot of things. He was citing some of the research saying that when it comes to reading a colonoscopy scan or a mammogram, AI can identify things that the best-trained doctors in the world can't see or won't see consistently, no matter how good they are. So it's understanding what it can do and then using that in that confined space where it's augmenting, which I know is part of the language you want to talk about. That same paragraph talks about how it can be used to facilitate better relationships between patients and providers, and how we might be able to come up with new treatments based on AI that, again, the greatest minds might not be able to see or analyze as quickly.
(26:47):
There are ways in which it could help expand access for more people by freeing up resources for providers. One example I know from when I was working directly in health care before is a program called DAX, an ambient listening software that listens to a patient encounter so that the physician is just looking at the patient, the way they want to, instead of at their computer. It's HIPAA compliant, it's secure, it creates a note, and I've heard providers say it saves them an hour a day. That is enhancing their capacity to be present to their patients and expanding access. That's just one example. So there are ways in which the technology can cut both ways. And we talked about this earlier today: traditionally in Catholic moral theology, technology has been seen as sort of neutral, and the agency lies with us, because technology, even AI, doesn't have moral agency. We have moral agency, both as individuals and as communities. I think that's an important piece to hold up in the Catholic moral tradition that is not always taken for granted in the Western tradition: not just individuals but communities, institutions have agency. So there's that traditional understanding that technology is sort of neutral. You were saying that there's a challenge to that here with AI. Can you unpack that a little bit?
Dan Daly, Ph.D., S.T.L. (28:23):
Yeah. So Tom, as you know, the traditional understanding is that every human technology is like a knife. It can be used to hunt and kill to feed your family, or to murder a person. So it's really in how it's used. The thing itself is neutral, but it's in its use that it gains its moral status.
Tom Bushlack, Ph.D. (28:42):
And the ends toward which it is used, to bring in some traditional teleological language for folks who might be familiar with it. And if you're not, it's okay: it's the purpose for which you're using the technology that determines the moral outcome.
Dan Daly, Ph.D., S.T.L. (28:59):
And with AI, it's different. Francis was on record saying this is not morally neutral.
(29:08):
This technology, because it's a reflection of us, has designed ends. The ends have already been designed by human beings, and there are ends that are inbuilt to the AI. The end of a knife, by contrast, is only inbuilt to a certain point. Again, in the example I gave, hunting and killing to feed a family, or murdering a person, it's very open; the tool itself is open to the specific end I use it for, as you noted. AI is different, in a way, because these ends have already been determined. So is it simply to extract profit, or to be more efficient? Those things can have a moral valence to them; they are not morally neutral. There's nothing wrong with profit and there's nothing wrong with efficiency, but when they're highly directed in a certain direction, highly directed, excuse me, to a certain end, then they do take on that valence.
(30:19):
And the document makes note of this as well: this is not a morally neutral technology. And so we need to be really critical about how it is developed, how we adopt it and adapt it for our own purposes, and what the use cases are that we end up pursuing with these technologies.
Tom Bushlack, Ph.D. (30:40):
So I think that's a great lead-in to talking about a "so what" for folks listening who might be working in health care. Maybe they're doing more of the ethics and mission side of it, but maybe there are folks listening who are executives and leaders in a tech department within a large health care system. What would be some positive guiding principles or takeaways, from the document but even going beyond that in terms of your own work? What should people be thinking about so that we can say, these are the ends that we want to use this for, and then design accordingly, whether it's products for patients or tools we use within the business side of health care, which is massive? What would be some easy takeaways where people could say, yeah, this is a good way for us to use AI that enhances our mission, enhances the healing ministry of Jesus? And then what are some red flags that might misguide us in that?
Dan Daly, Ph.D., S.T.L. (31:47):
Yeah, it's a great question. When we look at the way the document evaluates AI ethically, it's through a somewhat complicated ethical principle called integral human development. What does that mean? It means the full development of each person and of the whole person. I think we could probably understand it in our own idiom a little more clearly by talking about human flourishing. Does AI promote human flourishing, the overall wellbeing of the person? That is going to be the test of AI in Catholic health, in education, in the way it's used in criminal justice and government and so on. Does it promote the human good, the fullness of the human good? And think about the Gospel of Mark, chapter two: the Pharisees are all over Jesus for picking grain on the Sabbath, and he says, look, the Sabbath was made for man, not man for the Sabbath. The thing serves the person, not the person the thing. What has to be central when we think about adopting these technologies in Catholic health is that we think first and foremost about the patients. What is it going to do to the patients? Is it going to promote their wellbeing? And I mean that in the whole sense, not just their physical health but their mental, social, spiritual, psychological wellbeing.
(33:29):
What does it do to the workforce? What does it do to the people we've invited in to serve patients, the people we've invited into Catholic health to carry out the healing ministry of Jesus Christ? The document gets at this, and Francis spoke about this. And in fact, to connect this to Pope Leo XIV: Pope Leo XIV chose the name Leo because of labor issues related to AI. He sees this as a new industrial revolution. The last Leo, as you know, was Pope Leo XIII, and his contribution to Catholic social teaching was massive. What he did was write about labor. He wrote about the importance of the economy serving the laborer, serving the family, and not the other way around. This was the end of the 19th century; you've got all of these labor abuses. So Pope Leo is reaching back to his predecessor, Pope Leo XIII, and is looking at AI in terms of what it's going to do to work. Now, why is work so important for Leo? Why was it so important for Francis, and why is it so important in this document and for the tradition? Well, I think we would all do well to read the beginning of Laborem Exercens, John Paul II's encyclical on labor from the early 1980s.
(35:04):
There he draws on Leo XIII, and what does he write about labor? He says, look, there are three basic goods that labor serves, that our work serves. One is the support of our families; we can feed our families. The second is the promotion of the common good: through our work, we build up the common good. And I cannot think of an industry that builds up the common good more than health care; education, whatever it may be, I don't think anything does it more than health care. The third reason our labor is so important is that it provides fulfillment. It provides a sense of self-satisfaction at a job well done. And for some people, the promise of AI is that we don't have to work anymore. If you ask John Paul II, or you ask Pope Francis, or you ask Pope Leo XIV, or XIII for that matter, they would say that would not be a good thing, for our labor leads to self-fulfillment. We discover ourselves, we develop ourselves, through our labor. Now, labor can be oppressive. I'm not saying it's always the case that we find fulfillment through our labor, but it can be the case, and it often is the case.
(36:23):
And we need to make sure that we protect that part of our wellbeing. When leaders in Catholic health think of these things, they need to be considering what this does to those who find that self-satisfaction, that personal development, through their work.
Tom Bushlack, Ph.D. (36:43):
Yeah. Well, I love that connection to labor. And something you said earlier about human flourishing being a holistic approach that involves mental and spiritual health. I remember a discussion in health care about bringing in a third party that would use AI to go into patient charts. What it was actually looking for was the style in which the provider was writing the note, and it was able to identify markers that the provider might be under distress in their job, maybe moving towards burnout, maybe having other issues coming up. And then in a very gentle, caring, anonymous way it could flag that provider directly and offer them, hey, these are the things available to you, or flag the supervisor and say, you might just want to check in with this person. And there are similar things that can happen out of patient charts, things to notice.
(37:50):
And that is the one thing that large language models can do better than the human brain, I think that's fairly well established at this point, even if it has problems: analyzing just unbelievable amounts of data and information and making sense of it. So there are ways in which that can be done to support the wellbeing of our coworkers and of our patients and of their families. And then there's the communal element. Where do you see, again, whether it's from the document or just your own work, AI fitting in population health and improving the health of entire communities or vulnerable populations?
Dan Daly, Ph.D., S.T.L. (38:31):
Yeah, I think there are huge applications in population health as well: to identify the causes, the social drivers of disease and of health, to identify communities that may have certain needs more than others, and to divert resources to, and build resources in, those areas. I think it could be revolutionary in public health. And that's a way in which, when we think about the ethics of AI, we want AI to promote human dignity, human flourishing, and the common good, which only exists when we all share in it. As we look around, we see so many people left out of a healthy life, of education, whatever the good may be, whatever the human good may be. And the fact that AI can begin to do these kinds of population-level analyses, to identify those communities and to more specifically identify the drivers of disease for those communities, I think is enormously beneficial. There are studies that have been demonstrating the importance of AI in these areas. So yeah, I think there's enormous application there.
Tom Bushlack, Ph.D. (39:45):
So I love what you said there about the ways in which AI could help with population health and identifying needs within a community. And I want to tie that back to something that the document, and you, talked about earlier, which is the wisdom that is only possible in an informed human conscience, if you will, versus what large language models are good at, which is not the same as wisdom, even if it looks like it. And so I think about what leaders in health care might take away, particularly leaders who are doing this work, who have technological expertise in artificial intelligence beyond what I could possibly ever comprehend as a theologian and an ethicist. Yeah, exactly. But we want that; part of our goal in this is for people to have that takeaway. So in Catholic health care, pretty much every system that I've worked with or known, every ministry, does formation.
(40:45):
And so they're taking leaders and executives and providers, and they're introducing them to what I would very much call wisdom concepts: human dignity, the common good, solidarity, being in solidarity with suffering and caring for others, the preferential option for the poor and vulnerable. So that's a great example: leaders and executives developing wisdom and then using AI in a pop-health kind of model to identify needs, and possibly even solutions, in a community or for vulnerable populations within a community. And we know that one of the things AI does better than the human brain is take insane amounts of data, see patterns, and analyze it. And so in that sense, it truly becomes what the document suggests we should think of it as: augmented intelligence. That's right. Instead of replacing human intelligence. So there's a way in which health care leaders formed by the tradition can use these really powerful tools that are augmenting their capacity to identify issues within the community, or vulnerable populations within the community, possibly even which solutions or responses are most effective, and then use that wisdom component to say: now, as a moral community of Catholic health care, here's how we're going to respond to this identified need or this population group.
(42:21):
And that is very hopeful, and not very Orwellian.
Dan Daly, Ph.D., S.T.L. (42:26):
Yeah, no, and Tom, I think that's a great point. One of the points that the health care section of Antiqua et Nova makes is that we need to make sure we don't abdicate our responsibility to make moral decisions. Well said. Yes, of course we're going to use AI; it's already here. In the next 10 or 15 years, we don't quite know where it's going to go, but it'll probably be here in much greater ways than at present. But we cannot outsource moral decision making to AI. You used the word conscience earlier; Gaudium et Spes talks about the conscience as that quiet place where God is present to us, guiding us in making moral decisions. AI does not have a conscience; it does not have a soul; it does not have a spirit. But it can help us, in our conscience, in making decisions in good conscience. And we would be foolish not to use it as a tool, as long as it's a tool that's developed in the right ways and understood in the right ways. It cannot replace our moral agency, our conscience, or our moral decision making.
Tom Bushlack, Ph.D. (43:38):
And whatever temptation there might be to abdicate that, that might be one of the biggest pieces of discernment we're called to do as a community in using these technologies. There's a sense in which it would be irresponsible not to use them at all, because of the power they have, and frankly, we're already doing it; it's just happening. So it's a call to bring that spirit of discernment and wisdom and moral agency into it, and not to think that it gets us off the hook for doing that work. That's right. And it's hard work. I mean, let's be honest, it's a lot easier to study virtue than it is to practice it, as you and I both know. That's a great point. So there are so many different ways we could go with this, and I think we've done a really great job of comprehensively covering both the document and its implications. Why don't you tell us a little bit more about what the center is doing, what else is happening in the world of theology and ethics broadly understood, some events that people might want to know about coming up, and just what they can expect.
Dan Daly, Ph.D., S.T.L. (44:45):
Yeah, yeah, thanks. So we at the center have put together an all-star group of scholars and people in the health care industry to write a white paper; we've got a working group on AI ethics in Catholic health. The group already began to meet in the middle of 2025, and the white paper should be done sometime in the middle of 2026. I am hoping that's going to be a landmark paper, discussing these issues at a general, broad level. We're not going to get into the nitty gritty; we're not going to be making decisions for people, that's not the center's role. The center provides guidance for people's own consciences, so they can make decisions for their own systems and hospitals. But that white paper, I am hoping, will continue the conversation in terms of what the center is going to contribute. We're also going to run a conference at Boston College in March of 2026, so invitations will be going out for that, and we invite people to attend. That conference is on AI ethics and Catholic health. We've got a great group of speakers, including a keynote by the new president of the Pontifical Academy for Life, Msgr. Renzo Pegoraro. He will be there in person.
(46:07):
The Pontifical Academy has done a lot of work on this issue, and Msgr. Pegoraro has done a lot of work. So we're going to continue to cultivate that discussion within Catholic health. There's also a conference at the Pontifical Academy in November, so if people are interested in making the trip to Rome, I'll be there. Not a bad place to be.
Tom Bushlack, Ph.D. (46:27):
Have a glass of wine with Dan while you're there.
Dan Daly, Ph.D., S.T.L. (46:29):
That's great. Yeah. Look me up. So the Pontifical Academy will be hosting a conference on medicine and AI; that's another event our listeners may be interested in. And then finally...
Tom Bushlack, Ph.D. (46:42):
Just to clarify, the conference at Boston College, the one sponsored by the center, our center, is publicly open, right?
Dan Daly, Ph.D., S.T.L. (46:52):
Yeah. There are moments of it that are open and moments that require registration.
Tom Bushlack, Ph.D. (46:56):
Right? Yes. Right, right. Yeah. But in terms of registration, it's open. Folks listening, check out our website. The registration page is being built as we speak, so it's not live yet, but stay tuned if you want to attend, because it really will be an incredible conference.
Dan Daly, Ph.D., S.T.L. (47:14):
Sorry, go ahead. No, no. And then finally, we've got a webinar series we've been running this whole year. I kicked it off, and Paul Surez did the second one, did a phenomenal job. On August 5th, we'll have Marjika Grey and Anita Ho of CommonSpirit, leaders on these issues. Marjika Grey is a physician and also a leader in innovation; Anita Ho is a professor and ethicist who also works at CommonSpirit. They'll give a webinar on August 5th, so look for that. If you're listening to this podcast after August 5th, there'll be a recording available. And then the final webinar is with Brian Anderson, who's the CEO of CHAI, the Coalition for Health AI. That'll be in December, so look out for that. It's a really interesting organization, and he's a major player in the field of AI, AI ethics, and AI and human flourishing. I'm on a CHAI subcommittee on AI and human flourishing. So I think those are very worthy listens and watches for our audience on AI ethics.
Tom Bushlack, Ph.D. (48:30):
And we can put the links to register for those webinars in the show notes as well for people to find, as well as on our website at theologyandethics.org. Any last words of hope you want to share, or anything I didn't ask you that you think is really important coming out of this?
Dan Daly, Ph.D., S.T.L. (48:49):
Yeah, I guess in the end it's to reiterate Pope Francis's point that AI is both a hopeful and fearsome technology.
(48:58):
And I think we should approach it that way. This is not our salvation; this is not going to solve all of our problems. We have technological problems, and it may help to solve some of them. You mentioned diagnostics: a technological problem needs a technological solution. But the biggest problems that human beings face on this planet are not technological; they are moral, they are relational, they are social, they are spiritual. AI is not going to solve those problems. Only we can solve those problems, through our virtue, through how we relate to others, through our spirituality. So we should know what AI can and cannot do. And this document does a phenomenal job of discussing both the possibilities and the limitations of AI. And that's why it deserves to be read by our listeners.
Tom Bushlack, Ph.D. (49:55):
That's a beautiful last thought. Thank you. And thanks for the work that you are doing and will be doing on this with the white paper and the conference coming up. So I just invite folks to stay tuned and follow what we're doing at the center and beyond. Again, on behalf of my colleague Dan Daly, I'm Tom Bushlack, and this has been Ethics On Call. Thank you all for being here. You can find this podcast on YouTube, Spotify, Apple, and directly on our website, which is theologyandethics.org/podcast. And as always, if you're on one of those podcasting platforms, we ask you to follow or subscribe to our page and leave a review or some comment or feedback about the show. Those comments really help to grow our audience and build engagement, so we really appreciate you giving us that feedback. We'd love to hear from you; you can drop us a note through the website as well. Thanks again, it's been wonderful to be with you. We look forward to next time, and we'll talk to you soon.