This article is part of The Crossway Podcast series.
How Christians Can Thoughtfully Approach AI
In this podcast, Samuel James and Shane Morris answer theological, cultural, and ethical questions about AI submitted by the Crossway audience. Their expertise on the topic and their theological backgrounds give them an even-handed perspective on AI and help shape a thoughtful, practical approach to thinking through these questions.
Subscribe: Apple Podcasts | Spotify | YouTube | RSS
Topics Addressed in This Interview:
- AI Is Everywhere
- Are there any issues with Christians using AI tools as a source of spiritual information or guidance?
- Should AI be used to aid in writing worship songs, prayers, or sermons?
- What are the Christian ethical considerations for using AI-generated content that may violate copyright laws?
00:41 - AI Is Everywhere
Matt Tully
Samuel James serves as an acquisitions editor and director of content development at Crossway. Shane Morris also serves as an acquisitions editor at Crossway, and they both contributed to Scrolling Ourselves to Death: Reclaiming Life in a Digital Age, published by Crossway, in partnership with the Gospel Coalition. Samuel, Shane, thanks for joining me today on The Crossway Podcast.
Shane Morris
My pleasure. First time.
Samuel James
Thanks, Matt.
Matt Tully
I have a few stats to start us off today that I thought could be helpful to share. According to a recent US poll, seven out of ten people in the US said they’d used AI tools—mostly chatbots—at least once, with nearly half of those people (five out of ten) saying that they use them daily or at least weekly. And then furthermore, earlier this month, Microsoft released a report based on an analysis of 37 million chat conversations with their AI tool, Copilot. I just want to quote a little bit from their report, because I think it’s really interesting and telling as we think about where all this technology is moving. They write, "While searching for information remains Copilot’s most popular feature, we’ve seen a clear rise in people seeking advice, especially on personal topics. Whether it’s navigating relationships, making life decisions, or just needing a bit of guidance, more users are turning to Copilot for thoughtful support, not just quick answers." So that’s the landscape we’re already living in today, and I think it’s safe to say that our society’s use of these AI tools is only going to continue and expand, probably into more nuanced, more sensitive, more personal areas from here. To start us off, have either of you been surprised by how quickly these AI tools have been incorporated into our lives? Just a few years ago, an AI chatbot was something very few people had ever used, and most didn’t even know how advanced these tools could be. Now they’re just everywhere. Reflect on that initially. Are you guys surprised at how quickly this is all moving?
Shane Morris
I’ll just say at the outset that I’m not surprised at all, because the marketing push has been no holds barred. It’s been an unbelievable, Herculean effort to sell the public on the idea that this needs to be in everything. And now it’s in everything. Some application of modern AI, whether that be chatbots or something else, is in every app, every device. It feels like it’s going to be in refrigerators and washing machines soon. The companies say, "Now with AI . . ." Well, why do I need that in this particular thing? The question is never answered upfront. It’s just there, and then you get to experience whatever it is the company has decided for you to experience. So I’m not surprised, because it’s not been a demand-side thing, shall we say. It’s been a very supply-side thing, and it’s been pushed very hard on us, so I’m not surprised that people end up using it. It’s kind of mandatory in some way.
Matt Tully
I think you’re right that there is marketing involved here, and there are buzzwords involved that make people feel, Oh, it must be good. I’m hearing a lot of buzz about this. But for those who have actually used one of these tools, there is something pretty incredible about the experience, something you maybe never thought a computer could do before. We can’t deny the reality that these tools offer something people maybe didn’t realize they wanted, and they’ve promised to be an incredibly helpful tool. Isn’t that true?
Shane Morris
Well, I think it depends on what you’re using it for. If we’re talking about the kinds of applications that you just opened with—people seeking advice, people asking these chatbots to help them plan their lives or something—for me, that is very clearly beyond the pale of right uses of these kinds of things, because it comes down to a fundamental understanding of what it is. It’s not a person. There’s nobody there. There’s nobody home. Regardless of how cleverly the thing mimics human conversation, that’s not what’s going on. At a fundamental level, you have a statistical prediction engine that has scooped up a lot of information from the internet. Like, a lot. More than anything else ever has in computing history. And it is now drawing statistical correlations between things that are so numerous the creators themselves don’t understand what those connections are. But then it spits stuff out predictively based on what it’s scooped up. So the real source of what it’s saying is a series of connections that it made based on the internet. Not any specified part of the internet, but just the internet. If you ask it for advice, you could kind of conceive of that as saying, Hey, internet. What should I do here? And the internet would not give you a consistent answer. A Google search would clearly distinguish the different answers, but the chatbot is making decisions, if you will, based on its alignment (they had to tweak it not to be too sycophantic, and so forth) about what correlations it gives you. What site is it going to crib? What voice is it going to take? And none of those decisions are based on some sort of insider wisdom. The system, as far as I understand it, is just predicting what it thinks you want to hear. So I wouldn’t use it for that. I’d use it as a search engine, but not for Hey, should I break up with my girlfriend? or Should I take this job over another job? without clearly seeing what the human sources of that advice are.
06:58 - Are there any issues with Christians using AI tools as a source of spiritual information or guidance?
Matt Tully
So there’s a lot of nuance here, and even understanding how these systems work, as you laid out, can be really helpful for us as we seek to discern the proper use of these tools. So let’s jump into some of the questions. We recently asked our listeners to submit questions for you both related to AI and, more specifically, how Christians should think about and use AI. Needless to say, we received many questions from around the world. To kick us off, Samuel, one listener from Singapore wrote in with a question we received many versions of: "Do you see any issues with Christians using AI tools as a source of spiritual information or guidance or help, whether that’s helping me with a Bible study as I walk through God’s word, or going so far as to provide advice or synthesize Scripture’s teaching on a certain topic, whether it’s relationships or some doctrine of theology?" What are your thoughts on Christians exploring tools like this to serve those ends?
Samuel James
It’s a really good question, and I think it’s a question we can only answer if we have an accurate picture of what these LLMs, these large language models, are actually doing and how they work. We’ve alluded to that, but just to go back and put a spotlight on it for listeners who may not be aware: the LLM—and that’s ChatGPT, Claude, and these other programs that you might go to and have what feels like a conversation with—these large language models are trained on billions and billions and billions of bytes of information. What these programs are actually doing is identifying a logical response to your input based on how that input correlates with responses around the web. Essentially, you have a computer that is scanning the entire internet for words and phrases that match what you put in, and it predicts what the accurate or desired response would be based on other people or other articles or other entries that use similar language. So it’s purely a predictive machine. I think that’s really important for two reasons. One, it raises the question of where this information is coming from. To the question, Is it okay for Christians to go to AI programs with spiritual or biblical questions? you need to be aware that when AI answers you, it’s not answering you in the form of whatever you think it should be. If you want AI to be an evangelical pastor, AI does not become an evangelical pastor. If you want AI to be a conservative seminary professor, AI does not magically become a conservative seminary professor. Instead, what AI is doing is pulling from billions and billions of sources, the vast majority of which you may have no idea exist. I’ll give you one very practical example of that. There was a meta-analysis done a little while ago of the sources of information that AI programs use when they are given relationship questions. Should I get a divorce? Should I break up?
What are the signs that I should pursue this person? The number one website for seed material for AI is Reddit. And if you know anything about Reddit, it is a massive online community that is overwhelmingly secular, extremely therapeutic, and extremely hostile to Christian values. So if you, as a Christian, were to go to AI with a question about your relationship, the AI does not know that you’re a Christian and is not going to give you Christian responses. The AI is going to scan Reddit and give you what it thinks is the most popular, most desired response. Essentially, what you’re doing as a Christian is going into a dark room of a building you’ve never been in before, sitting down in a chair opposite a shadow whose identity you can’t see, and asking it personal questions about your life, with no idea who’s sitting in that chair because it’s completely dark and shrouded. So to answer the question about spiritual matters, it is really important to know what AI is doing. And there’s also the reality that AI is not human. AI does not think. AI does not reason. AI does not contemplate. It is a machine. It can’t do that. What AI can give you is data. That’s the source of AI’s usefulness: it’s a very quick, very efficient producer of data. But data is not wisdom. So you can go to AI and say, Hey, what are the top twenty Bible verses that talk about forgiveness? And the AI will give you really good data. It will show you the top twenty Bible verses. It might even give you a pretty good interpretation of those verses and a pretty well-written summary of how to integrate them into your own life. It can absolutely do that, because it’s really good at reading data like that and reproducing it. So yes, is there a use case for AI when it comes to database questions? Absolutely. And people should be mindful of the benefits that could come from accessing that storehouse of data.
My concern is that, just knowing who we are as people, the data use cases blend so seamlessly into other use cases. Give me twenty verses on forgiveness can easily become, ChatGPT, do you think I should forgive this person? Here’s what they did to me. Now you have what I think is a violation of the foundational principle for Christian use of AI, which is that Christians must always discern human from inhuman. That’s a biblical-theological category. Christians have to discern human from inhuman, and when we ask inhuman things to give us human wisdom, there’s a confusion there that is related very closely to the biblical concept of idolatry.
Matt Tully
As you said, these tools are based on human-generated data—vast quantities of human data—so one response could be, Well, this is a human response. It’s a synthesis of a lot of human responses brought together into this single answer. So, is it really true to say there’s no human input? It’s not as if a machine was just left alone on a deserted island and somehow developed these insights. This is all distilled down from other humans.
Samuel James
Data that’s produced by humans is not the same thing as a human. For example, imagine you devised a system where somebody, instead of being a parent to your children, would come in and basically read parenting books and follow what the parenting books say, but they had no relationship with your children. They did not want to know your children. They simply reproduced the data they were gleaning from the books. You would not call that person a genuine parent. You would say they’re simply reproducing parenting data instead of being an actual parent, because parenting is something only a human can do. It’s an irreducibly human dynamic. I recently wrote an article that will be up at Desiring God in the next little bit, and one of the examples I gave for this very question was this: if there were an aspiring counselor at your church who said, I really want to counsel people in this church, but I do not want to get to know them, I do not want to pray with them, and I do not want to know anything beyond what they tell me in the moment, you would say that’s an inhuman trait for a counselor. You would say they are not qualified to be a counselor. AI is only inhuman. There are only inhuman traits intrinsic to AI. So again, you have efficient data, and you might have data that was originally written by humans, but that’s not the same thing as an encounter with another human.
Shane Morris
And it’s not the same thing for another reason, Samuel, and I love that analysis. If you remember, a few months ago, Grok, which is the native AI on Twitter (now X), had a sort of Nazi fugue state. It just became this Groyper that cracked jokes about how the Jews were in control of everything. It even did gas chamber gags and things like that. I don’t remember all the specifics, but it was out of control, and it was national news that you would ask Grok something and Grok would respond as if it were this kind of bitter, isolated teenager on Reddit or X. This happened because, it turned out, that was where it was drawing its training data from. So the fact that it’s human generated doesn’t really answer the most fundamental question about the kind of data it’s synthesizing and giving back to you in the form of a chat, which is, Which humans? Whose beliefs? And I like to extrapolate that to a hypothetical of an AI large language model trained in a different culture, in a different place or time. If you set up one of these server farms and trained the large language model on the German literature of the late 1930s, you are literally going to get a Nazi AI. At no point is it going to say, Hang on, wait a second. This doesn’t make sense with all that I’ve learned about human rights or Christianity. It wouldn’t do that. It would simply draw the conclusions that predominate in its training data. And what you have in our time is, in large part, LLMs that draw the conclusions that predominate in their training data, which is secular, roughly left-leaning in many ways, and therapeutic. It’s definitely not giving you Christian wisdom. There may be ways to narrow that down a little, but the magic is happening behind the curtain. Like Samuel said, it’s like one of those witness protection setups on the other side of the table. You don’t know who it is you’re talking to.
Matt Tully
And that’s such an important emphasis: again, it’s not just whether the data is human, but which humans and what kind of data are being fed into these machines. One thing we’re seeing happen increasingly, though, is that a lot of these companies are allowing individuals and other organizations to use their own data to train these models. At least on the surface, that raises the question, What if I’m a pastor and I want to use a tool that has been trained on the history of Reformed theological and biblical reflection, a tool coming from a particular theological tradition and insight—what would be the problem? Wouldn’t that address a lot of the concerns you guys have just raised? One listener from Johannesburg, South Africa noted that there’s always been a tension between exploring and meditating on God’s word independently, individually, doing the work yourself, especially as a pastor preparing a sermon or teaching and leading your congregation, and the impulse, rightly, to rely on and derive insight from the collective wisdom of the church throughout history, from other believers. So if there were a model trained on and in a distinctive theological tradition, on resources that were trusted and reliable, couldn’t that address a lot of the concerns you might have about where this information comes from in the first place?
Shane Morris
Think of a model that’s trained on specific content, which you then use as a glorified search engine. In other words, let’s say you take the entirety of Schaff’s Church Fathers collection, train it on that, and say, Okay, chatbot, give me every instance where any church father dealt with the humanity of Christ, the incarnation. I could see that sort of thing being very useful, because as opposed to a traditional search engine, which relies on keywords, this thing has the ability to connect words and concepts and go, Alright, this concept, even though those words don’t appear, is in Athanasius where he talks about this, and then just deliver up a list of links. I could see that application being very useful. This kind of summarizing power, or raw mathematical ability to connect data points, is no substitute for wisdom, and I think you ultimately do need human wisdom in that process. But the search engine thing, just to concede a point, I think that is useful. I’ve used even the Google version. What’s the Google version called?
20:47 - Should AI be used to aid in writing worship songs, prayers, or sermons?
Matt Tully
It’s up at the top. And the tricky thing is that these tools are being integrated into our daily lives in ways that we can’t always avoid, and it’s just going to be a part of our experience. And I think as you guys both have already acknowledged, there are nuances here. The distinction between a glorified Google search providing data and sliding into synthesizing and distilling that down and summarizing that, and even reworking the inputs into new ideas or new concepts, can be a little bit hard to distinguish at times. Here’s another question we got from a listener in Chile: "I recently came across an AI-generated song about the holiness of God, and it was based on Scripture and the writings of R. C. Sproul. And honestly, I was moved and I was blessed by the song. I was genuinely edified as I listened. So, how do you feel about using AI for music and even prayers and other kinds of worship-oriented things, when we actually can sense that they are edifying to us and we have a sense that they are faithful to Scripture?"
Shane Morris
Were you both at the recent TGC conference? And do you remember what John Piper did on this, where he gets up and reads an AI-generated prayer and kind of has everyone sort of taken in? They’re like, Should we go along with this? Is this all right? And then he asks, "Is this a prayer?" He pauses and goes, "No!" And he just pounds the podium and yells, because a machine can’t pray. And he drives home the point I would want him to make in that moment, which is that we alone are appointed as the image of God and the worshipers of God. What amounts to a series of soldered connections really cannot do that. It can produce a kind of simulacrum of that, but it can’t actually do it. We need to be the ones that do that. So that’s a funny example of the ambiguity that’s sneaking into these conversations. But the separate question is, What is art? What actually is music? There was a really great essay earlier this year or maybe late last year by Ted Chiang in The New Yorker, and it was called "Why AI Isn’t Going to Make Art." His contention was that no matter how much an LLM spits out what looks like art, it’s not actually art on a fundamental level, nor could it ever be, because art is intention. He thinks intention, the decisions made by a conscious entity, is the sine qua non of art. That is just what art is. And we know that because if you see a pretty configuration of rocks, you don’t say, Ah! It’s art, because that wasn’t intentional. Nobody did that. If someone writes you a heartfelt love note, but then you find out it wasn’t them, that it was just a machine and they didn’t actually even look at it, it doesn’t mean anything anymore, because intention is what’s actually valuable to us. Even if it’s just a single word, a single meaningful word, if you know it came from a person, it has a meaning. If it didn’t come from a person, that word no longer has any meaning.
With music, as with art more generally, that is a crucial question that’s entirely separate from whatever the theological accuracy of something is.
Matt Tully
Taking that distinction then, something that was purposely created, let’s press that into maybe a category that we could see being an issue—and actually is already an issue—when it comes to AI and thinking about the church and Christianity. We’ve already seen the development of some AI-powered sermon-writing tools and resources that, again, offer pastors a shortcut to developing a sermon based on a passage of Scripture. Starting off with just a simple outline, perhaps, based on a passage, going all the way through to maybe a full manuscript that they could deliver. As a thought experiment, let’s just pretend somebody asked a question like this: If someone were to generate a sermon through one of these AI tools and post it online, and then a Christian were to encounter that sermon and not even know that it was AI-generated (maybe reading that transcript or listening to someone actually deliver an AI-generated sermon) and felt ministered to, the question is, Could the Holy Spirit actually lead somebody through that sermon? Can someone actually be edified spiritually through that sermon or through a prayer that maybe was AI-generated but delivered by a person or redelivered to the Lord sincerely, with heart and true devotion? Is that something that God can and does still use?
Samuel James
I think the answer to that question has to be yes. That could certainly happen. People have come to Christ by picking up gospel tracts without any idea where they came from or any context for them. Awareness of the human origins of something is not a prerequisite to the power of language. I think we can all be moved. And this kind of goes back to the question about the music. I think Christian wisdom requires us to think in broader categories than sin/not sin. So if someone comes to this issue saying, Is it a sin if I listen to AI-generated music or if I am encouraged by an AI-generated sermon? my answer is no. It’s not sin. I can’t bind your conscience in that way, and I don’t want to bind your conscience in that way. But we really want to think in categories beyond licit versus illicit. We want to think in terms of what is helpful, and we want to think in terms of the plausibility structures these things are building in our lives. Every time we turn to a piece of technology, we are accepting a story about what life should be like. This is the way all technology works, whether it’s a jet airplane, a screwdriver, or AI. So on the question about the sermon, could the Lord lead someone to faith through an AI-generated piece of content? Absolutely. Yes, the AI bot can take what Christians have written and put it out as output, and someone can encounter that truth and be pointed toward Christ. Absolutely that can happen. The reason I would counsel a Christian not to habitually seek that out or habitually utilize AI is that there is something we lose when we completely divorce our thinking and our language from the human source. For example, in the musical category, one of the things I would tell someone, and one of the things I would put into practice myself, is that of course you might find a piece of AI-generated music inspiring or catchy or fun. That’s totally legitimate. I imagine it happens every day now.
One of the concerns I have, though, is that people often write their own music and their own poetry because they want to hear something spoken to them. There’s a particular truth or a particular imagery that you’re seeking out, and you don’t find it in the song books that you have, and so you write it. That is a major source of art throughout history. I worry about our ability as people to be songwriters, to be poem writers, to be authors if every itch we have can be scratched by a machine pumping out a curated version of what we’re looking for. There’s something that diminishes the desperation to experience something that drives creation if you can automatically receive whatever you can imagine through a machine. And I think there’s cultural evidence of that. So anyway, to talk about the gospel element of it, when someone encounters a book or a gospel presentation by a person, I do think that there are reasons that that human person phrased that gospel presentation the way they did. And it was their own experience. It was the need of the occasion. Maybe the particular preacher or particular writer phrased something in a way that you’ve just never heard before, and the lights come on. And so God uses the experiences and the temperaments of people to direct his gospel in certain directions that are fruitful for listeners. And so by taking the human element out of it completely, I think we do stand to lose some of the power and effectiveness that comes when God’s people write and proclaim and share and compose out of their own human, embodied experiences that the Lord has given them.
Matt Tully
Samuel, you mentioned art and the way that an over-reliance on these AI tools could hinder that. But I think that even applies more so to our spiritual lives as Christians. If we get into the habit of relying on AI to help us formulate a sermon outline or to help us formulate a prayer that we intend and we would then genuinely pray to the Lord, that could start to hinder our ability to think about God’s word in a robust way, our ability to think about our own lives and our own needs and formulate a prayer. And so I do think we’re going to lose practice with some of these fundamental Christian pursuits if we are too dependent on a tool that just makes it so easy, so fast to do what seems like the same thing.
Samuel James
I do think if a pastor has an AI-generated portion of a sermon (and I’m not necessarily talking about outlining or a timeline or something like that; I’m talking about speaking words to his congregation that came to him through an AI program), then as a congregant and a church member, I would want to know that. I would want him to say something like, Now, what I’m about to say, I asked this LLM for. It’s not Scripture. Take it for what it’s worth, but I thought it was helpful. I would come away from that thinking, Well, I wouldn’t have done that, but I can have sympathy for why he did. If a pastor is not disclosing that and is actually preaching to his people from an LLM without saying anything, I think that’s a problem.
Shane Morris
And there’s something even more technical and intrinsic to this process that is a real loss, and that will result in further loss and degradation. It’s that not only will you get bad at writing sermons or writing songs, but AI will get worse at writing sermons and songs, because AI is a river that can never flow higher than its source, which is its human-generated training data. You see this problem of model collapse happening now, where companies are actually paying lots of money for pre-AI data from the 2020-ish internet, before these things started dropping, because that internet is pure human content. And that’s precious as training data. It’s not corrupted by AI output that’s difficult to distinguish. So one of the more technical and, I guess you could say, non-moral reasons for real hesitation to allow AI to do what we call art or writing or anything in the realm of the humanities, or certainly spiritual output, is that as that output enters the training data and the web, the actual output will get worse and worse in the future. There’s something intrinsic to it that’s parasitic. It robs humans of the ability to do the amazing stuff that the large language models are just imitating by virtue of having trained on it in the first place. You create a feedback loop, Samuel, as more people resort to this sort of lazy creation of art, music, or writing with AI. They lose the ability to do it themselves. They’re no longer contributing to the great mass of human output. And it’s not hard to foresee a scenario where people just lose the will to contribute creatively, that amazing spark of humanity that we’re seeing refracted in AI. And AI itself, because it has no ability to create anything original, degrades, and we have nothing left. That’s the cycle I see unfolding.
Samuel James
That’s a great point, Shane.
34:10 - What are the Christian ethical considerations for using AI-generated content that may violate copyright laws?
Matt Tully
And we’ve already established that all of the training data that goes into these AI models is human-generated information that is then being synthesized and reworked and mashed together in a way that then provides answers to these questions that we might be typing in.
And yet many people have acknowledged that a lot of this training data was scraped from the internet and other sources without permission from the original creators. One listener in Melbourne, Australia asks, "How should the fact that many generative models were trained on copyrighted material without the consent of the copyright holders factor into Christian ethical considerations about using these tools?" What would you guys say to that?
Shane Morris
I’ve had trouble convincing people that this is actually an issue, but I think the intellectual property and "thou shalt not steal" thing is a very big deal when it comes to AI. And it goes back to that idea of intention. Let’s say you read The Lord of the Rings as a kid, and you were really inspired by Tolkien. Your dream is to grow up and write your own fantasy series in the same genre. And in many ways, people can detect echoes of Tolkien in it. What you have done there, the thing that meaningfully distinguishes your work from Tolkien’s, is your creative input, your intention and decision making at a hundred thousand, a million points along the way, where you chose to go this way and not that way. That makes it your original work. It’s transformative. AI cannot do that. It cannot make an original decision. It cannot inject intention. All it can do is run calculations on which thing is most likely to come next, given the training data. So you scrape Tolkien, you scrape George R. R. Martin, you scrape Robert Jordan—pick your fantasy author—and then you tell it, Spit out a story that’s like this. All you’ve done is run a very complicated algorithm that basically changes every fourth word. That’s a crude way to put it, but that’s kind of what you’ve done. And if you picked up Tolkien and ran an algorithm on it that said, Change it to, like, Toto Baggypants, like in VeggieTales, the Tolkien estate is going to be giving you a phone call. You can’t do that. But if you do it in a more complicated and subtle way, all of a sudden we get ambiguous about it. Is that okay? Is that not okay? I think it shouldn’t be ambiguous, because even though the product may be a little more sneaky, the process is fundamentally the same.
Samuel James
I think there’s a boiling frog effect. You know the metaphor: how do you boil a frog? You put him in cold water and gently turn the heat up, because the frog won’t realize what’s going on until it’s too late. And I think that metaphor is apt for our relationship to data in the internet age. I have felt convicted over how much of my personal data, and the images and information about my children, I willingly give up to social media platforms. I think Christians are overdue, and I’m speaking to myself as much as anyone here, for a very serious conversation about how much control and how much information we are giving up to big tech firms in the things we post, the pictures that we post, and the updates that we post. There’s a boiling frog effect, where we’ve all acclimated to the idea that, yeah, I know, technically, the terms of service on Facebook mean that Facebook owns this picture of my son now, and they can pull this picture for any reason and they can use it for any reason and they can sell it to China if they want to. I understand that the terms of service are that way, but I just really want to post this picture. Now we have a situation where someone can spend weeks, months, years of their life creating something out of the abundance of their God-given creativity, and a big tech company can say, That’s great. We are going to take that, give you nothing, and we are going to train our machines on it so they can talk like you, so that somebody can get a cheap imitation of your work for absolutely free and no credit to you. I think there’s something tragically wrong with that, but I think that progression started years ago with the surrendering of our own data and our own sense of control over our lives to these social media platforms. So I do think it’s a major ethical concern. It’s actually the reason I mentioned the pastor offering a disclaimer if he’s going to teach with AI, because he has no idea who he’s quoting.
If he’s quoting AI, it could be a good pastor, a bad pastor, a neutral pastor, it doesn’t really matter—there’s somebody behind those words, and he has no idea who they are. And I think that requires a high level of self-awareness rather than just assuming that it’s okay, assuming that this is helpful, assuming that no one would be upset with this. And I’m not trying to overspiritualize it too much, but I think there’s an eschatological element to this, where we are going to be accountable for every word we speak. And I think that is true for the words that we didn’t think were ours. When we use these AI programs and we copy and paste and utilize them, if we end up stealing from somebody and neglecting the work that they did, we are accountable to the Lord for that. It’s not going to work to say, Well, I had no way of knowing who that was. The Lord’s just going to say, You knew how these technologies worked, and yet you felt like it was worth the risk. So I do think that there is a very strong ethical consideration, but I think we’re overdue for a conversation at the front end about what these tech companies receive from us as people, and maybe how we should change the way we interact with the internet as a whole.
Matt Tully
That was part one of my conversation with Samuel James and Shane Morris, answering your questions about AI and the Christian life. Stay tuned for part two, which releases on Wednesday, January 28th. Samuel and Shane contributed to Scrolling Ourselves to Death: Reclaiming Life in a Digital Age, published by Crossway in partnership with the Gospel Coalition. Pick up a print copy of the book for 30% off, or get the ebook or audiobook for 50% off directly from Crossway by visiting Crossway.org/plus.