
Some Thoughts On AI

Venturing in

Prefer to print and read? Here is this article in pdf form.

The Fakeness of the Machine

The other day I was scrolling through substack—which feels different from scrolling through other parts of the social internet, but frankly, I’m not convinced it’s any more productive—when I came across this gem of a note by Griffin Gooch:

This made me laugh—because it was 1) boring, 2) insipid, and 3) very clearly not the sort of thing Griffin would write.

But it did get me to thinking: what would happen if I asked it the same thing? I don’t use ChatGPT, preferring Claude1, but I gave it basically the same prompt: write a substack note in the voice of Will Dole. Now, given that I don’t write for Christianity Today or have 4,000 plus substack subscribers, I had to give it a little help by providing links to this site and to our church substack, which has most of the sermons I’ve preached over the past few years. That ought to give an LLM the info it needed on my writing.

It still wanted a prompt in terms of topic, and so I decided to be meta and give it the prompt of “thoughts on AI.”2 And after it combed through my writing and then received this prompt it proceeded to spit out not a substack note-length piece of “writing”, but a full substack article-length piece [this is not that product].

I was surprised by how well it reproduced my style, in the sense of presenting an issue, attempting to come at it from an unexpected angle, and seeking to resolve the issue with principles based on Scripture. It even did some of the Eugene Peterson-inspired allusions to Scripture that don’t directly bear on the topic but that do shed light from a slantwise angle, which I try to include in my writing. That was, frankly, impressive. It was even able to predict that the guy who writes Stopping to Think would be more than a little concerned about how we might end up using AI to short circuit thinking. Again, I consider that impressive.

But, this much was also clear: anyone who has read more than two pieces of my work, with any level of attention, would surely be able to tell that what Claude had churned out was not my voice. It hit the right topics. Included a lot of the same ingredients. And still felt, basically, soulless. Because it was.

What is Writing For?

If you asked me, “What is writing for?”, my short answer would be: thinking.

The use of language to move ideas, concepts, and images from one mind to another is the most human thing possible. Made in the image of the speaking God of Genesis 1, Psalm 33, Isaiah 55, and John 1, human beings are creatures who think and speak.

And, though evolutionary biologists disagree, the Biblical record would indicate that, perhaps not from the very beginning, but at least very early in human existence, human beings write. Writing and reading are not natural in the same way speech is. They are skills that must be learned, honed, and developed over time. Though some folks are born with more aptitude than others (as is true with any skill), no one is born reading and writing. The capacity to communicate with words that can go beyond your presence—further than the limits of your voice and ears—is a further extension of this human gift of language. Reading and writing are uniquely human activities.

This is part of what makes the outsourcing of reading and writing—be it “content” on a blog, an essay for your school paper, or the idiotic suggestion I read on substack for people to talk to ChatGPT to get summaries of classic books instead of reading them for themselves—so concerning. When the LLM summarizes the data for you, you haven’t had to think and wrestle and chew through it yourself. When you have ChatGPT write the paper, you fail to gain anything from the actual process of reading, meditating, agreeing, disagreeing, articulating, or changing your thoughts.

When I’m writing, I’m exploring ideas as much as I am trying to communicate them. In the process of writing, I am gaining clarity on what I think about this subject or that. Even in my sermon writing process, though I prefer to preach without a written manuscript, I will often still write the sermon (or at least key parts of it) in order to make sure I have thought the issue or the statement or the transition through. There are an awful lot of ideas that feel great in your head, until they start coming out of your mouth. Or pouring out before you on the screen or the page.

If you outsource that work—and yes, it is work—you’re also outsourcing the clarity and growth that come with it.

How I Use AI

So, all of that to say, I’m rabidly anti-AI, right? No, not exactly. I’m still grappling with how best to use (or not use) the various AI tools out there. Here is where I’ve landed for now, though.

I’m not a Luddite—I’m not completely opposed to all technology invented after some randomly selected date. But I am deeply skeptical. It has often been noted that technologies don’t just give abilities, they actually shape us. They shape the way we think and the way we act. In the case of AI, based on what I wrote above, I would argue that the way many people are using AI is actually reducing their capacity to think clearly at all.

The most important thing I do is think. If a tool is functioning to bypass or short-circuit that process, what good is it doing? If the drive to be more “impactful”, more “efficient”, or “more” whatever robs me of the opportunity to be more human, and therefore, more fully imaging my Creator, then is it really moving the ball forward for my life? Is it truly doing my congregation any good if I use ChatGPT to draft a better conclusion to a sermon, if that conclusion isn’t the result of my labor in study and thinking and growth? I would argue that, even if the paragraph is better than what I could have written on my own, we are all worse off (see 1 Timothy 4:14-16).

If the whole idea of this newsletter is to ask you to stop and think—how could I, with any sense of honesty, do that if I haven’t first done so? I would rather not post than post and have it feel like a shortcut or a lie.

But I still do, sometimes, utilize AI. Here is the line I have drawn: I use AI tools to capture, not create.

Here are two examples of what I mean by that.

  • Very often I have an idea or set of thoughts which I don’t have time to work out in writing. But I want to get them spit out into some usable form or fashion. So I pull out my phone and record a voice note. I then upload that voice note into some form of (AI-powered) voice transcription software. And then, because that software (at least the free versions) often does a marginal job of formatting what it has transcribed, I will upload a .txt file of the transcript to an LLM—usually Claude—and give it a prompt something along these lines: “format to paragraph. remove timestamps. retain original wording.” I then treat what it gives me back as a first draft which I go over and rework, reformat, and refine until it’s where I feel comfortable publishing it.

    In this process, AI has not done any of the creative work for me. It has essentially served as an unpaid amanuensis. I suppose, hypothetically, that some folks would consider the use of an amanuensis to not be real writing. However, if luminaries such as Thomas Aquinas and the Apostle Paul (Romans 16:22) used such scribes in their writing, I don’t really feel out of bounds to call such speaking a form of writing. Further, when I then sit down to edit, I am not simply polishing an LLM-generated piece of copy. I’m working over my own thoughts, my own ideas, and—crucially—my own words.

    Occasionally I have written pieces for this newsletter that way.

  • The other way I have used AI is very similar to what I wrote above, but with something closer to a finished product. As I noted earlier in this piece, I prefer to preach with either an outline or with no notes—not always, but usually. However, one of the main drawbacks to this is the lack of a manuscript for future reference and use.

    So, when I upload the sermon audio here to substack (remsenbible.substack.com), I will often then follow the process I outlined above: download the .txt file from substack, upload it to Claude, and then prompt with “format to paragraph. remove timestamps. retain original wording.” If I have time to go over that manuscript and make sure everything is correct, I will post it no differently than if I had written the manuscript prior to preaching. However, I usually don’t have that time available on Sunday afternoon when I’m doing this, and so I put a little notice at the top along the lines of: “this transcript was generated by AI: please let me know if you notice any errors.”

I think there is a crucial distinction to be made between an after-the-fact transcript and a before-you-preach manuscript. If my manuscript were AI generated, again, I think I would be cheating myself of growth, cheating my church of the spiritual nourishment they would receive as a result of my personal study and growth, and I would feel pretty Acts 5:1-11, Ananias and Sapphira-y about the whole thing. But to record what has already been said? And put it in a readable format? That seems, at least to me at this point, like a blessing of our current technological age.

Concern

That final point is what makes me concerned when I see posts urging places like substack to add “AI free” badges to the work of writers and artists. I agree with the motivation and thought process behind those desires—writers want it to be clear that they aren’t dependent upon machines for their ideas, that their work is, in fact, human. I’m here for that all day long.

I also couldn’t put that label on all my work, for the reasons outlined above. And as a result, I think it could create confusion. In the same way that “organic” came to describe a government certification which was expensive and difficult to maintain, rather than a set of practices possible for any farmer to exercise, I worry that “AI free” could come to describe a certain set of writers whose work is substantially the same as mine, except that my first draft got onto paper differently.

Having said that—I’d rather live in that world than in one where my kids and their peers are allowed to outsource their thinking to engineers in Silicon Valley. That’s not a world we want to live in.

Reality Theology with Griffin Gooch
Why ChatGPT Might Not Be Sinful But It Is Definitely Sad and Kind of Lame
1. As I outline below, one of my primary prompts is “retain original wording.” Claude, in my experience, follows that prompt. ChatGPT, in my experience, changes wording and summarizes, despite the instructions I give. As a result, I refuse to use it.

2. When I use “AI” in this article, I am referring to Large Language Models (LLMs) like Grok, ChatGPT, and Claude. I know there are other forms, but this is what is relevant to me at the moment.
