Cinema
Some AI-generated, Church-related movies I created with Google's new Veo 2.
David W. Patten’s fun, 2nd-hand, late account of being visited by Cain
Joseph Smith writing D&C 121 in Liberty Jail
Moroni burying the plates
While AI has been able to produce very short movie clips for some time now, I've waited to do this post until the results were somewhat passable. I think we're more or less there, but there's still a lot of work to do. The clips are still quite short, and progress is still being made on fine-grained control over camera angles and the like. But we all know where this is going: eventually, filmmaking that previously took millions of dollars and elite-level contacts will be doable by any creative in their basement, so the homegrown Steven Spielbergs of Mormonism won't have to be born in the right place at the right time with the right contacts to make their masterpieces.
Of course we aren't there yet, or even close. I couldn't quite get the gold plates to render correctly, for example, and Google's famously censorious AI put more clothes on Cain than the account describes, but for something I made in ten minutes, the potential down the road should be obvious.
Changing Minds
A research team out of Zurich had AI bots infiltrate the r/changemyview subreddit and try to, well, change people's minds. They found that the bots were three to six times more effective than human commenters at doing so, revealing another potential AI use: playing Devil's Advocate (or helping you develop your own argument). To test this use case, I had GPT-4o argue both sides of the question of whether an LDS person should vote for a pro-choice candidate. You can see the back-and-forth here. It brought up quotes I wasn't aware of (and yes, I checked it for hallucinations), and it generally brought to bear the best arguments you hear from the different sides in these discussions (obviously I am in a position to know, having had some experience with these back-and-forths).
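For the technically inclined: I ran the exchange above in the ChatGPT interface, but the same Devil's Advocate pattern is easy to script. Here is a minimal sketch using the OpenAI Python client; the model name and prompts are illustrative assumptions, not the exact ones I used.

```python
# Minimal Devil's Advocate sketch using the OpenAI Python client (v1.x).
# Assumes OPENAI_API_KEY is set in the environment. The model name and
# prompts are illustrative, not the exact ones used in the post.
from openai import OpenAI

client = OpenAI()

QUESTION = "Should an LDS person vote for a pro-choice candidate?"

def argue(side: str) -> str:
    """Ask the model for the strongest good-faith case for one side."""
    response = client.chat.completions.create(
        model="gpt-4o",
        messages=[
            {
                "role": "system",
                "content": (
                    "You are a careful Devil's Advocate. Make the strongest "
                    "good-faith case for the side you are assigned."
                ),
            },
            {"role": "user", "content": f"Argue the '{side}' side: {QUESTION}"},
        ],
    )
    return response.choices[0].message.content

# Generate both sides from the same script so the steelmanned
# arguments can be compared directly.
for side in ("yes", "no"):
    print(f"--- {side.upper()} ---\n{argue(side)}\n")
```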
Deep Research
Another big splash in AI land lately has been the use of AI to create in-depth, advanced reports on a topic. I had it run a summary of the literature on gay LDS mental health, for example, a subject with a lot of online noise but also a lot of nuance once you get into the actual studies, and it did okay but not perfectly (although it caught my papers, so it passed that bar, I guess).
I had Google's latest model (Deep Research with 2.5 Pro, currently on a month-long free trial) produce a report on the origin and development of the LDS doctrine of apotheosis. It took about 15 minutes to create a 10,000-word essay on the subject which, while occasionally wandering a bit, exhibited nuance and analytical rigor and (I believe) accurately tied the different strands of scripture, history, and theology together.
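A note for anyone who wants to script something similar: Deep Research itself is a consumer feature without a public API, but the long-form generation step (minus the agentic web research) can be roughly approximated against the Gemini API. A minimal sketch, assuming the google-generativeai package; the model name is an assumption.

```python
# Rough approximation of the long-form report step against the Gemini API.
# Deep Research itself (with its agentic web browsing) is a consumer
# feature; this only reproduces a single long-form generation.
# Assumes GOOGLE_API_KEY is set; the model name is an assumption.
import os

import google.generativeai as genai

genai.configure(api_key=os.environ["GOOGLE_API_KEY"])

model = genai.GenerativeModel("gemini-2.5-pro")
prompt = (
    "Write a long, carefully sourced report on the origin and development "
    "of the LDS doctrine of apotheosis, tying together the relevant "
    "strands of scripture, history, and theology."
)

response = model.generate_content(prompt)
print(response.text)
```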
Comments
6 responses to “AI and the Gospel: Cinema, Changing Minds, and Deep Research”
Also, filmmaking that previously took thousands of dollars and contacts with the worst lowlifes in the region will be doable by any dullard in their basement, and the dullards will never leave their basements again. And it will be in everyone’s power to outsource online advocacy to bots that have no need to eat, sleep, or follow human instincts for honesty or good-faith argument.
Sorry, I’m feeling kind of pessimistic about this. Deep Research sounds cool, though.
Trying to decide if we’re going to get Dead Internet Theory, epistemic chaos, or maybe the e/accs are right and AI will lead us into all truth.
The last seems least likely.
Hi Stephen C: I have an idea for another installment in your continuing series about AI. It might also address the pessimism that Jonathan Green (and likely others) feel in the face of ever-expanding AI capabilities. It involves addressing the debate of how humans can stay ahead of AI.
Two noteworthy articles on this topic appeared this past week: “Will the Humanities Survive Artificial Intelligence?” in The New Yorker and “A Philosopher Released an Acclaimed Book About Digital Manipulation. The Author Ended Up Being AI” in Wired. (If the paywalls get in your way, I found gift links available through the social networking site Bluesky.)
You might even find it valuable to put these articles in conversation with Elder Bednar's devotional titled "Things as They Really Are 2.0." One of the key themes that could emerge from this conversation is the question of how humans can retain a sense of agency in the face of powerful and persuasive AI models, rather than becoming people who are "acted upon" (2 Nephi 2:26).
In the future I might do one about the downsides and AI safety and such, but for now these posts are mostly updates on the technology and what it means for different use cases, so you'll basically get a new AI post from me every time some new AI functionality comes out. Not that the downsides of AI aren't worth talking about, but the field is moving so fast that how exactly the downsides will manifest, and what they will mean for our culture and economy, is still in flux. I suspect that some of the really big AI safety issues and downsides are still further off.
In the spirit of the original post, for the last couple of weeks I’ve been asking various AIs some of the stat questions I get at work. I wanted a better sense of what the researchers who go to AI before they come to me are being told. Generally the answers have been quite good. Frequently the code it generates won’t run as written, and sometimes the answers aren’t very helpful. (“Why is such-and-such happening?” seems to be a particular challenge.) But I haven’t gotten a clearly wrong answer yet, though I’m sure it will happen.
The main danger I see is that it makes no effort to find out what level of statistical knowledge the person has and to adapt its recommendations accordingly. What started all this was a researcher who had gotten in completely over their head trying to run very sophisticated models with probably only one semester of undergraduate statistics, because that's what an AI had told them to do. Needless to say, they were missing the big picture and not using the models appropriately. Another thing I don't see an AI doing is stepping back to ask what research question the person is actually trying to answer, and making sure it's well-defined and leads to testable hypotheses before diving into the details of how to test them.
The bottom line is that right now AI is a useful tool in the hands of someone who knows what they’re doing, but dangerous in the hands of someone who does not. Unfortunately, I don’t think that’s well understood. No one has been brave enough yet to tell me “I don’t need to learn this because AI will do it for me” but I’m sure some are thinking it.
In the spirit of the responses, it’s tragic and an indictment of our society and our political system that we have to worry about “how humans can stay ahead of AI.” I’m becoming more and more optimistic about the current and future capabilities of AI, and its ability to make the world a better place. But it seems likely that it will make the world a worse place for many of us instead, and that’s on us as a society.
“I don’t need to learn this because AI will do it for me”
Ugh, my son with his atrocious grammar and spelling right now. It’s probably similar to what math teachers went through when calculators first started coming out.
“The bottom line is that right now AI is a useful tool in the hands of someone who knows what they’re doing, but dangerous in the hands of someone who does not.”
Very true.