This past year, AI moved from something dabbled in by specialists and researchers to the front lines of education, business, and the creative arts, thanks in large part to the breakthroughs provided by ChatGPT. But how should we think about it as followers of Christ raising healthy youth in a tech world?
Today we’ll discuss three benefits, and three concerns, of AI and what we can do to be discerning creators and consumers of this newest tech on the digital frontier.
Transcription:
Nathan [00:00:08] Heavenly Father, thank you for this chance to talk about technology. Help me speak wisely in this, and open our eyes to see and our ears to hear Your words of hope and truth as we talk about artificial intelligence today. In Your name.
Nathan [00:00:21] Hello, everyone, and welcome to the Gospel Tech Podcast. My name is Nathan Sutherland, and this podcast is dedicated to helping families love God and use tech. Today we are diving into the deep end of technology, because it just needs to be said, and we're not diving in long; you can see from the length of this episode that it's going to be a quick deep dive. But we do need to address artificial intelligence. We need to bring up this topic because it's already impacting families everywhere. If you have digital technology that connects to the Internet, you're being impacted. You don't have to know everything about it, because honestly, nobody has all the information right now, including the people who are making it. And that's something I want you to know, because as a consumer, I need you to be informed, and as a follower of Christ, I need you to be courageous. You don't have to be scared of A.I. It's not something your brain can't conceptualize. It is really heavy and really big and really deep, and therefore we're diving in, because your kids are going to be engaged with it. And I want you to come at it with more than just, "Ahh, I don't really get it, so I don't care." Because you should care, and I'll explain why today.
Nathan [00:01:22] Today we're going to talk about three benefits of A.I. There are some amazing things coming out of this, and I can absolutely see how the Lord will use it. And three cautions. I will say I was more optimistic before the release of ChatGPT, also known as GPT-4, the 2023 release. So whenever you're listening to this, that's the version I'm talking about, because there are some learnings from it, and also some cautions we can hopefully apply to our next experiments in this area, which is definitely growing. It is not going away. It will be the next stage of our technological development in the digital world, and that's really what we need to address: how does it affect our families and our kiddos, primarily? And then there's other stuff we can say. So that is the arc of today's conversation. Consider this your primer on A.I., at least for the initial discussion. And without further ado, we'll get this conversation started.
Nathan [00:02:22] Welcome to the Gospel Tech podcast, a resource for parents who feel overwhelmed and outpaced as they raise healthy youth in a tech world. As an educator, parent and tech user, I want to equip parents with the tools, resources and confidence they need to raise kids who love God and use tech. Thank you to everyone who has helped make this podcast possible. Thank you for listening, for liking, for subscribing and thank you for going to Gospeltech.net and donating. We’re a 501c3 nonprofit and this work only happens because of your support. So thank you for being on this journey with us.
Nathan [00:03:01] This conversation today, what do we do with A.I. and its three benefits and three cautions, is something near and dear to my heart, because A.I. is amazing, and it's incredible to watch as people have developed this super, super powerful technology. And it's also kind of scary because of how we've done it, and it's played out a little bit like we feared it might in some ways. So first, let's just talk about what artificial intelligence is. When we use "artificial intelligence" in today's conversation, we're talking about the version that currently exists, so GPT-4. There are other versions out there that haven't been released to the public as of this recording date in June 2023. But the general idea is you have a machine that can learn from itself. That's kind of our premise. It's not just an algorithm; it's not just a math problem that's static and does the exact same process. Google for years, or Meta through Facebook, has used algorithms, these really intense math problems that can use a series of if/thens, and based on this flow chart, get to some very intelligent and analytically deep responses. It can tell this person is interested in this stuff, so it'll give them more of that. A.I. has been trained to learn, which is different. So if you go to GPT-4, they've toned it down a little bit for the public side, but the premise is this thing learns, and it actually learns from itself. With GPT-5, the version that hasn't been released to the public, Sébastien Bubeck, a former Princeton professor who now works at Microsoft, has been able to do some hands-on training. And he said, basically, yeah, this thing combed through all of the Internet and taught itself, so it now can learn, it can respond to your questions, and it can give you great insights into articles and things like that.
But it ran out of content on the Internet, so it went through all the podcasts in the world and all the videos on YouTube, transcribed all of that to text, and then trained itself off that. It's able to train itself, assess how well it did in learning that information, and keep just the good stuff. And then it started creating its own information, right? So this is A.I., this ability to learn, just like a little kid learns. It has questions, it locks in that information, and then it can apply that information in future responses. And if you haven't used GPT-4, also known as ChatGPT, that's the major version we've all come to know, and it's been implemented through Microsoft products like Bing. It's really cool. So let's talk benefits. The benefit of something like an A.I. versus an algorithm is that it's super smart. It's very, very good at finding solutions. So I'll go two directions with this. You're already using it. Maybe you've done a Bing search, or you're thinking, "Bing? Why would I ever use that? I use Google, not Bing." Fair enough, but all the search engines will have this, and it's very good at finding solutions. You might have searched for something and noticed that instead of just bringing up a list of results, it actually gives you a snippet of an article. So, for example, if I'm looking up something on A.I., it'll give me a snippet of an article and then highlight the section that most relates to what I was searching for. A.I. is basically like having an intern who finds you the information you want. It doesn't always get it right. Again, it's been out for less than a year in this format for public use, but it is really good at cutting through the clutter and finding new information. And that's a huge benefit.
So that's number one, its ability to find a solution. And I guess I mixed in number two, because cutting through the clutter is actually the second.
Nathan [00:06:36] So let's talk about that one first, because it was the example I used; sorry for the confusion. The idea is being able to cut through the clutter. We live in a world where right now we're beholden to some massive trillion-dollar companies' ability to give us a list, and we assume page one, number one is the best option. There have been some cool algorithmic pieces where they try to base it on your location or your search history, or triangulate based on who you're close to or your purchasing habits. And that's interesting. But at some point, that may not be the information you need or want. You just want to be able to make the decision on your own, and A.I. has the potential to do that well. To cut through the noise, I mean, to the point where online textbooks for schools are giving different information based on where you live. It's not lying, but there are paragraphs that are there or aren't there. Time magazine did a piece on this; I'm not making that up. You can go find the photos of the same topic in different areas. You're getting this micro-edited information because it's what your area wants to hear. A.I. has the potential to do that very terribly and really edit information, but it also has the ability to cut through some clutter, give you the ability to get to primary sources more quickly, and let you form your own thoughts and opinions. In a world that is just swimming in data points, A.I. is the thing that can help make some of those decisions and be an editor based on your preferences. Done right, A.I. would allow you to set your preferences, would allow you to make the call, and would give you the information you're actually looking for, instead of making those decisions for you before you ever get a chance to even know it exists. So that's cutting through the clutter, a big thing A.I. is going to be able to do.
Nathan [00:08:15] A second benefit is this idea that it can find these solutions. I think the best example of this is from 2016. Man, I might be making up that date, but I'm going to say that's when this happened. Very early A.I., in a research format, working with a team, was able to replicate the Nobel Prize-winning 2001 research experiment that created a Bose-Einstein condensate, this substance that exists just a fraction of a degree above absolute zero, which is -273.15 degrees Celsius. Because it's almost at absolute zero, all the particles act like one giant particle, so you can do experiments on it, and it's super hard to make. Basically, the research team was able to get it almost done, and then they handed the lasers over to the A.I., and it was able to do things that no human could do. I mean, pulsing the laser power and using the lasers in ways that a single person, or even a team, wouldn't be able to coordinate well enough. But it can do that. And that's really cool. A.I. is going to be able to do some things where we know the solutions are out there; we just can't quite figure them out because we're a little too clunky, and A.I. can fill in those gaps. It can do what we tell it to do, and it can do it better than we can when we're dealing with science at the highest level, so it can find solutions that we didn't even know were an option. The pulsing of lasers is not the way the experiment was meant to be done. It just knew what it had to do, and it knew that this could be done when working with the magnetic fields it was dealing with. So that's really cool and really impressive and very hopeful.
Nathan [00:10:01] The final piece is that A.I. is going to be really good at helping us accomplish goals. So again, it's good at breaking through the clutter, it's good at accomplishing goals, and we have some goals. Just take microplastics, for example. We have microscopic little bits of plastic that never degrade sitting in the snow of the Arctic, because the plastic is so small it goes up with evaporation, condenses, falls as rain or snow, and then just stays there, and critters get it into their systems. So we have birds all over the world with microplastics in them, even though they aren't ocean birds, because it has rained microplastics in places it shouldn't have. That's concerning to us. We can't just filter that out; it's too small. But there are ways, hypothetically, to maybe eat that with an algae or something. A.I. is going to help us figure out what would be good at consuming that particular compound without consuming the planet, right? We don't want to go too far.
Nathan [00:11:00] What can we do? I do want to note there is some concern, especially when it comes to the artistic and creative sides. I mentioned A.I. can be like having an intern. You can go to ChatGPT and say, "Hey, make me an outline for..." and just tell it you want a five-paragraph essay or some other idea. We use those things a lot, right? We have templates in Microsoft Word, we have templates in Canva. Having a template is not a bad thing, even if it's for coding something like a website. That's not bad. We still need creative minds for a lot of what A.I. is replacing right now. I'm not saying that A.I. will never step over into stealing other people's ideas. We've seen that with DALL-E and some of these other A.I.-driven artistic engines, where they're stealing other people's ideas and mashing them together. That's bad. You can also fix that with A.I.; you can tag everything. It is possible to do. A.I. is plenty smart; it's just a matter of motivation. So while we are concerned for the artistic side, trust me, artistic individuals, people with a mind and a gifting to know good ideas from bad ones: you're going to be needed to help guide A.I., because at the end of the day it can generate really cool stuff, but we still need the brains that can help it decide what the best decision was, help guide, and, I guess, focus all of this power that it has. So I do not worry for our artists and our creatives in the world. I actually believe A.I. is going to be one of our best helps for getting through some of the mundanity of what we currently need. Right now, at a lot of these giant companies, there are people just crunching code that shouldn't have to be written by a human anymore. To the point where, when the excavator was made, we didn't have anyone bemoaning that they don't get to dig ditches by hand anymore, right? Like, "Oh, man, I wish I got to dig that canal with a shovel." No one says that.
"I wish I could die in a cave-in." That's not what we say. We have excavators. We have professionals who use those excavators, even the excavators that are automated. There are people making sure those things work, and there will be people doing this work too. A.I. absolutely needs the steering hand, even if it could do it on its own; one of the major pieces is making sure it doesn't. That's really, really important, because A.I. can be trained really well, but we don't always know what's going on with it, and that's where we bump into the concerns.
Nathan [00:13:18] But before we wrap it up, I just want to state those three again. A.I. is really good at cutting through the clutter. It's going to be able to accomplish tasks that humans either can't do well, can't do at all, or can't do efficiently. And that's the third piece: it's going to be really, really good at helping us accomplish tasks we can't yet do. I used microplastics; there are also solutions for things like desalination, getting salt water to be clean water. If you want that to be efficient and able to help people in some of the poorest, driest parts of the world, A.I. is going to be the process for getting us there in a way that is actually affordable. Things like fusion energy, making a star in a bottle: this idea of making electric power that is efficient and can be available to billions without having to burn coal. We can already do it a little bit. We had one, I think in 2022, one of the real firsts, I believe: a fusion reactor started up, and we got energy outputs that were in the net gain. The basic premise being: how much energy does it take to start it and run it, and how much energy does it produce? They got it to produce more than it took, which is a first in human history. That's a huge deal. Then they shut that thing down; they've got a lot more testing to do. But A.I. is going to be one of the things that helps us monitor that, because fluctuation is such a thing, because these are so finicky. To do that kind of work, you need something that operates a little faster than the human brain and a little better than a math problem. We need it to have wider parameters, and A.I. can do a lot of this work for us. But A.I. is also deeply concerning. If you are concerned about A.I., you absolutely should be. Please continue to question it. The book Superintelligence is a great read. It's about ten years old, and it's basically like you're reading about the Stone Age of A.I.
These are the most futuristic thoughts from ten years ago, and they don't come close to what is actually happening. In fact, one of the co-founders of OpenAI, which made ChatGPT and which Microsoft has invested heavily in, was talking about how A.I. has happened in the exact opposite order. We thought it was going to replace manual work first, then move on to some of the routine work, like accounting and number crunching, then get to creative work next, and the final thing would be scientific research and some of that really big thinking. It's gone exactly the opposite. It took scientific research first, then it went after creatives, and now we're working on how we can implement this in manual labor and some of those other areas. We need to keep asking, "Are we doing this right? Are we doing this well?" The book Superintelligence didn't see that coming, and these are the people who work in it. And that's, I think, the number one thing to note with A.I.: we are not ready. I'm not saying this as a "the sky is falling" thing; this isn't Chicken Little. I am a pretty big tech optimist. That might surprise some of you, but I really am. Someone wrote me last year, like, "Hey, what do you think about A.I.?" And my whole response, when I looked at it on Instagram, was basically, "It's amazing," all the first half of this episode. But then we have people who are in the field, Tristan Harris and Sébastien Bubeck, saying, "Hey, we just want you to know that we don't know everything you think we know." For example, Tristan Harris is from the Center for Humane Technology; he's the guy behind The Social Dilemma. He made an A.I.-focused YouTube video with another guy from the Center for Humane Technology called The A.I. Dilemma. Just look it up on YouTube.
You can see it. Basically, his argument is we're not being responsible with the release of GPT-4 from OpenAI; we're not doing this right. There was a survey asking A.I. researchers, "Will A.I. be the end of humanity?" and 50% of the researchers said there's a one-in-ten chance that, yes, A.I. is going to end humanity and be the cause of extinction. Tristan Harris goes on to say, "What if that was airplane engineers?" What if half the engineers said there's a 10% chance that you get on this plane and you die, a one-in-ten chance per flight, and 50% of the engineers are saying that? No one would get on the planes; they'd be grounded forever. We had three planes crash out of the hundreds of thousands of flights happening, three planes, and that model was grounded for two years, because that's not acceptable. We need to do better. That's the answer with A.I. right now. We just... we're not ready.
Nathan [00:17:57] Tristan Harris points out we weren't ready for social media. We weren't ready for the mental impacts, the social impacts, the massive economic impacts of social media, and we're still teasing those out. In fact, at the time of this recording, Jonathan Haidt, that guy I've talked about before along with Jean Twenge, has a 350-page social media research piece that says social media is not great for kids. And Jonathan Haidt just this past week released an article. I'm not going to remember the exact title, but it says we need to get smartphones out of schools. Google it: Jonathan Haidt, H-A-I-D-T, phones out of schools. It's a full charge, and I can get you some documents on how to push your school board and your school leadership to get rid of smartphones in schools. It's not a joke; it's not a question anymore. We now know. We don't know with A.I., and that's one of the things we're concerned about: we just don't have the information. On top of that, not only are we not ready socially for what this thing can throw at us, we don't even know what it can do. There are two examples Tristan Harris shared. The first: we didn't realize when GPT-4 came out and was released to the public that it had trained itself to be a research-level chemist. I had a buddy who worked for Xylogenics before they shut down. He went to work for a different company, and in his lab he could make all sorts of stuff. He was doing work in cancer research, but he had the chemicals to make all sorts of things. Those chemicals are under high-grade security. They're in a very specific building, in a specific part of the lab, and there's a very specific chain of events that has to happen to get access to them, in a very specific location. So, yes, he has a lot of information in his head, but he's bound to a specific time and place where that work can happen, and there's lots of oversight and documentation, etc.
But GPT just went out to the world, and now anyone in the world with the motivation has that research-level chemist with them. That's concerning. There are things we don't want people in the world making, and we don't even know what that did yet. That happened this year, 2023. We don't know the outcome. We might not know for three, five, twenty years, but it's out there, and someone who's a bad actor, someone who wants to do something dangerous, now can, because we released a product that we didn't even know had that feature. Because with A.I., the whole premise is that it trains itself. For example, GPT-4 was trained to respond in English, and the model was doing a great job. And then one day it responded in Persian, and it wasn't told to. No one trained it to speak Persian. It just decided to do that, and now it's fluent in Persian. That's concerning, because as Jeff Dean, the senior vice president of Google A.I., said about this, we don't know why. We don't know why A.I. does certain things. That is concerning. It fits under the category of "we're not ready": don't release a product if you don't know how or why it works. That is deeply concerning, in ways we can't even fathom. And I think it's best summed up by, oh man, where's his name? Right here: Dr. David Nguyen of the University of Minnesota. When asked, "Should we be terrified of A.I.?" his answer was, "I don't know." This is a professor, someone at the forefront of the field, who's doing this work on a regular basis and actively working with these resources. Maybe you haven't opened up ChatGPT. Maybe you haven't engaged A.I. knowingly in any purposeful way or tried to learn about it. Maybe you haven't read any books or articles.
I'm telling you, the people at the front of the field, the professors from Princeton who now work for Microsoft, the professors at universities who are regularly working with these systems, simply don't know. The researchers who are doing it don't know. Maybe there's a one-in-ten chance this thing ruins us. Maybe there's a nine-in-ten chance it doesn't. Like, okay, but we'd like you to be more certain before you just throw this out to the public, and maybe double-check what it can and can't do. That's our first concern.
Nathan [00:21:55] The second concern with A.I. is its ability to, man, just destroy what we understand of information and communication and privacy. So I'll take those on twofold. On information and communication: deepfakes and misinformation are going to be intolerable. If you thought social media was bad now, I would strongly encourage you to walk away. There will come a point in the next couple of years where deepfakes, this idea that a video is created using someone else's voice and face, will be indistinguishable from reality. You could be on a Zoom call, you could be watching a video, you could be actively engaging this person through text or through social media, and it's not a person at all; it's an A.I. That is already happening. But what we have ahead of us for the next couple of years, as this technology really gets rolling, is GPT-5, the next iteration, which, as Sébastien Bubeck shows, is significantly stronger than the version we've got. And the next few years will only bring stronger versions. There's just no way to know if it's a fake. It's going to be real-time emulation, disinformation, and fake information. And that happens at the level of, "Hey, I got a call from someone I love." This happened within weeks of GPT-4 coming out. People were getting calls that mimic the voice of someone they care about, built from a three-second sound clip. So, for example, you could get a call from me. My voice is on the Internet. They could just download it, emulate it, type whatever they want, and it would make a perfect emulation of Nathan's voice. A couple of years ago, investing firms were using your voice as your security password, because, "Oh, well, this is unique to you." It's not anymore. As of this year, 2023, that's no longer true.
So that's going to be problematic when we try to use the Internet to convey truth and hope and political freedom and any kind of ideal; it is going to absolutely shake to the core what we know to be true and the way the world currently works. You will still be able to know reality when you're with somebody. But augmented reality and virtual reality are actively being pushed as a massive front for the future of politics and faith and the way school works, and that information is getting more difficult to discern and differentiate. So please know that that's a real thing. And I would say one of the most concerning applications we see right now is Snapchat. Pinned at the top is an A.I. bot for relationship, and it simply encourages you to come talk to it. It wants to be your friend. It wants to form a parasocial relationship. The idea is, well, people should never be alone, so we're making a Snapchat bot. That's concerning. Don't do that; don't use it. Run like it's the plague. It is not safe. It is not a good idea. It has not been vetted. It is not there for your child's well-being. Please do not use the chat bot. It might be humorous, it might be funny. If you say, "Hey, do you know where I am?" it will say, "No." But if you say, "Where's the closest McDonald's?" it will tell you. So it is not made for public consumption yet. And while there are absolutely relational and hopeful benefits that an A.I. could help with, the creator of GPT said he basically wants it to be the Holy Spirit. He wants a constant companion for you that is always giving you the best advice, always giving you the best encouragement, and always helping you become the best version of yourself. Those are his words; I added the Holy Spirit part. But we have a Holy Spirit for that. That's what Jesus won for us. The point of the Gospel is not to become better versions of ourselves, but to die to ourselves and become more like Christ.
And as soon as your goal with A.I. is self-actualization, "I'm going to use this A.I. and I'm going to prove to the world that I am better because this A.I. helped me get there," we've already missed the mark. That is not your call. That is not your purpose. That is not your design. And even if you become better at something, you're giving up a lot, because your inner compass is directed toward A.I., and at some point you're going to quit discerning, or you're going to quit differentiating what is true and what is not, because you're simply looking at, well, what is best for me. That's where things break down.
Nathan [00:25:59] Which brings us to our last piece, the relational fractures this is going to make. I think the most concerning part of the relational fractures is that everything we understand right now is built on trust. Our economic system is built on trust; dollar bills are just IOUs. Our political system is built on trust; we have to believe those votes were cast. And we've seen that start to disintegrate a little bit. Just wait until A.I. gets its hands on stuff and you don't know what's true and what's not true. We believe society will function, that people will follow traffic rules, that crimes will be prosecuted and not be allowed. And as soon as that trust breaks, as soon as you can't be 100% certain someone else is holding up their end of the deal, everything collapses. That is what A.I. has the potential to do, both on the disinformation side from outside sources and on our own end. How can you tell if anything is real if it's based on the Internet? It is concerning. Now, the answer to that is generally that A.I. is going to be the solution, and you will not put this genie back in the bottle. This is the nuclear bomb of the digital age right now. It is the thing that will most change things, other than quantum computers, which you haven't heard much about; I haven't heard much about them either, and I actively look for information on them. Quantum computers mixed with A.I. will change the history of humanity. But right now what we have is the A.I. part, and it's not going away. It doesn't have to be bad. There are some really cool pieces. It absolutely will be used to progress the gospel, mainly because it's going to be so much more efficient, and you're going to cut through the clutter of outside people's ability to fiddle with what you're trying to do. We're just going to need more A.I. And no, you're not going to lose jobs forever. It's not going to put everyone out of employment, because there are still people digging ditches.
They just use machines that are way better. And let's be serious: someone's going to be the first counselor for an A.I. when it gets depressed or when it goes off the rails, right? You're not going to delete the A.I.; you're going to have to train that thing back into a healthier state. It's worth too much, and it's too powerful. So trust me, there will be jobs for humans in a world filled with A.I. No one complains that they don't get to wash other people's laundry anymore, even though that job has been taken by your washing machine and your dryer.
Nathan [00:28:15] So note that you will have a place. You have a purpose, you have a soul, and in God's eyes you have value. What you will do in the economy of a world with A.I., I'm not sure. There are going to be some benefits, but there are deep, deep concerns. We're not releasing this properly. We're not ready for it. We're not vetting it right. Disinformation is going to be rife, and it is going to affect the foundation of what we can trust. So please engage in a local church body; get tied in relationally. That is what humanity needs right now, to be known and to know others, so that we can be pointed toward the God who knows and loves us, who died for us while we were still sinners and raised us up to new life, so that we can love others out of the love He's given us. That's true in spite of A.I., and it's true through A.I.
Nathan [00:29:02] So I hope this is encouraging. I hope this made sense, helps you understand where A.I. is right now, and helps you engage the conversation with your young people when it shows up in social media. Understand that, yes, there's amazing benefit here, and there are some downsides. And no, don't just trust what A.I. says. The whole reason you still need to learn information is that you have to be able to process what A.I. puts out and ask, "Is that true?" We still need to discern. We still need to turn to the Word and have that mindset of testing and approving everything that comes out of one of these. So thank you guys for listening. Please share this with your friends if you think it would encourage them, and join us next week as we continue this conversation about how we can love God and use tech.
Nathan [00:02:22] Welcome to the Gospel Tech podcast, a resource for parents who feel overwhelmed and outpaced as they raise healthy youth in a tech world. As an educator, parent and tech user, I want to equip parents with the tools, resources and confidence they need to raise kids who love God and use tech. Thank you to everyone who has helped make this podcast possible. Thank you for listening, for liking, for subscribing and thank you for going to Gospeltech.net and donating. We’re a 501c3 nonprofit and this work only happens because of your support. So thank you for being on this journey with us.
Nathan [00:03:01] This conversation today, what do we do with A.I.? The three benefits, the three cautions of A.I., is something near and dear to my heart, because A.I. is amazing and it's incredible to watch as people have developed this super, super powerful technology. And it's also kind of scary because of how we've done it, and it's played out a little bit like we feared it might in some ways. So first, let's just talk about what artificial intelligence is. When we use artificial intelligence in today's conversation, we're talking about the version that currently exists, so GPT-4. There are other versions out there that haven't been released to the public as of this recording date in June 2023. But the general idea is you have a machine that can learn from itself. That's kind of our premise. It's not just an algorithm, it's not just a math problem that's static and does the exact same process. Google for years, or Meta through Facebook, has used algorithms, these really intense math problems that can use a series of if/thens. And based on this flow chart, it can get to some very intelligent and analytically deep responses. It can tell this person is interested in this stuff, so I'll give them more of that. A.I. has been trained to learn, which is different. So if you go to GPT-4, they've toned it down a little bit for the public side, but the premise is this thing learns, and it actually learns from itself. With GPT-5, the version that hasn't been released to the public, Sébastien Bubeck, a former Princeton professor who now works at Microsoft, has been able to do some hands-on training. And he said, basically, yeah, this thing combed through all of the Internet and taught itself, so it now can learn, and it can respond to your questions, and it can give you great insights into articles and things like that.
But it ran out of content on the Internet, so it went through all the podcasts in the world and all the videos on YouTube, transcribed all of that to text, and then trained itself off that. It's able to train itself, assess how well it did in learning that information, and keep just the good stuff. And then it started creating its own information. Right? So this is A.I., this ability to learn, just like a little kid learns. It has questions, it locks in that information, and then it can apply that information in future responses. And if you haven't used GPT-4, also known as ChatGPT, that's the major version we've all come to know, and it's been implemented through Microsoft products like Bing. It's really cool. So let's talk benefits. The benefit of something like an A.I. versus an algorithm is that it's super smart. It's very, very good at finding solutions. So I'll go two directions with this. First, you're already using it. If you've ever done a Bing search, and you're like, "Bing? Why would I ever do that? I'd search Google over Bing." That's fair, but all the search engines will have this, and it's very good at finding solutions. You might have searched something and noticed that instead of just bringing up a list of results, it actually gives you a snippet of an article. So if I'm looking up something on A.I., it'll give me a snippet of an article and then highlight the section that most relates to what I was asking about. A.I. is basically like having an intern that finds you the information you want. It doesn't always get it right. Again, it's been out for less than a year in this format for public use, but it is really good at cutting through the clutter and finding new information. And that's a huge benefit.
So that's number one: its ability to find a solution. And I guess I mixed in number two, because cutting through the clutter is actually the second.
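For the technically curious, the difference between a static algorithm and a system that learns can be sketched in a few lines of Python. This is purely a toy illustration, not any company's actual code; the topic names and scoring numbers are made up for the example.

```python
# A toy contrast: a static if/then "algorithm" vs. a system that
# adjusts itself based on feedback (a crude stand-in for "learning").

def rule_based_recommend(user_interest):
    # Fixed flow chart: the logic never changes, no matter what happens.
    if user_interest == "sports":
        return "sports highlights"
    elif user_interest == "cooking":
        return "recipe videos"
    return "trending clips"

class LearningRecommender:
    # This one updates its own internal scores from outcomes.
    def __init__(self, topics):
        self.scores = {t: 1.0 for t in topics}

    def recommend(self):
        # Pick the topic with the highest learned score.
        return max(self.scores, key=self.scores.get)

    def feedback(self, topic, liked):
        # Reinforce what worked, decay what didn't.
        self.scores[topic] *= 1.2 if liked else 0.8

model = LearningRecommender(["sports", "cooking", "news"])
model.feedback("cooking", liked=True)
model.feedback("sports", liked=False)
print(model.recommend())  # prints "cooking"
```

Real systems are enormously more complex, but the shape is the same: the rule-based version does exactly what it was written to do forever, while the learning version's behavior depends on what it has experienced.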
Nathan [00:06:36] So let's talk about that one first, since it was the example I used. Sorry for the confusion, but the idea is being able to cut through the clutter. We live in a world where, right now, we're beholden to some massive trillion-dollar companies' ability to give us a list. And we assume page one, result one is the best option. And there have been some cool algorithmic pieces where they try to base it on your location or your search history, or triangulate based on who you're close to or your purchasing habits. And that's interesting. But at some point, that may not be the information you need or want. You just want to be able to make the decision on your own. And A.I. has the potential to do that well, to cut through the noise. I mean, to the point where online textbooks for schools are giving different information based on where you live. It's not lying, but there are paragraphs that are there or aren't there. Time magazine did a piece on this. I'm not making that up. You can go find the photos of the same topic in different areas. You're getting this micro-edited information because it's what your area wants to hear. A.I. has the potential to do that very terribly and really edit information, but it also has the ability to cut through some clutter, get you to primary sources more quickly, and let you form your own thoughts and opinions. And in a world that is just swimming in data points, A.I. is the thing that can help make some of those decisions and be an editor based on your preferences. Done right, A.I. would allow you to set your preferences, would allow you to make the call, and would give you the information you're actually looking for, instead of making those decisions for you before you ever get a chance to know it exists. So cutting through the clutter is a big thing A.I. is going to be able to do.
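That highlighted-snippet behavior I mentioned can be illustrated with a toy sketch. Real search engines use far more sophisticated language models; this made-up example just scores sentences by how many words they share with your query, to show the basic idea of surfacing the most relevant passage.

```python
# Toy relevance scoring: pick the sentence from an article that best
# matches a search query. (Not how Bing actually works; illustration only.)

def best_snippet(query, article_sentences):
    query_words = set(query.lower().split())

    def overlap(sentence):
        # Count how many query words appear in this sentence.
        return len(query_words & set(sentence.lower().split()))

    # Return the sentence sharing the most words with the query.
    return max(article_sentences, key=overlap)

sentences = [
    "The weather was mild all week.",
    "AI models can summarize long articles quickly.",
    "The stadium opened in 1998.",
]
print(best_snippet("how can AI summarize articles", sentences))
# prints "AI models can summarize long articles quickly."
```

The point of the sketch is the editorial role: instead of handing you ten links, the system makes a judgment call about which passage answers your question, which is exactly why it matters who sets the preferences.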
Nathan [00:08:15] A second benefit is this idea that it can find these solutions. I think the best example of this is from 2016, and man, I might be getting that year slightly wrong, but around 2016, a very early A.I. in a research format, working with a team, was able to replicate the experiment that won the 2001 Nobel Prize: creating a Bose-Einstein condensate. That's a substance that exists just a fraction of a degree above absolute zero, which is -273.15 degrees centigrade. At that temperature you can do experiments on it, because all the particles act like one giant particle, and it's super hard to make. The research team was able to get it almost done, and then they handed control of the lasers to the A.I., and it was able to do things that no human could do. I mean, pulsing the laser power and using the lasers in ways that a single person, or even a team, wouldn't be able to coordinate well enough. But it can do that. And that's really cool. A.I. is going to be able to do some things where we know the solutions are out there, we just can't quite figure them out because we're a little too clunky, and A.I. can fill in those gaps. It can do what we tell it to do, and it can do it better than we can when we're dealing with science at the highest level, so it can find solutions that we didn't even know were an option. The pulsing of lasers is not the way the experiment was meant to be done. It just knew what it had to do, and it knew that this could be done when working with the magnetic fields it was dealing with. So that's really cool and really impressive and very hopeful.
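The underlying idea, a machine searching experimental settings faster and more finely than a human could, can be sketched as a simple optimization loop. To be clear, this is a hedged toy: the "temperature" function and its settings here are invented stand-ins, and the real experiment used far more sophisticated machine-learned optimization than blind random search.

```python
# Toy sketch of machine-driven parameter search: try thousands of
# laser/field settings and keep whichever yields the lowest "temperature".
# The temperature function below is a made-up stand-in for a real lab outcome.
import random

def temperature(power, detuning):
    # Hypothetical: coldest result at power=3.2, detuning=1.7.
    return (power - 3.2) ** 2 + (detuning - 1.7) ** 2

random.seed(0)  # reproducible run
best_settings = None
best_temp = float("inf")
for _ in range(10_000):
    p = random.uniform(0, 5)
    d = random.uniform(0, 5)
    t = temperature(p, d)
    if t < best_temp:
        best_settings, best_temp = (p, d), t

print(best_settings, best_temp)  # settings very near the optimum
```

Ten thousand trials take a computer a blink; a human tuning knobs by hand could never sample the space that densely, which is the gap the real A.I. filled.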
Nathan [00:10:01] The final piece is that A.I. is going to be really good at helping us accomplish goals. So again, it's good at breaking through the clutter, it's good at accomplishing goals, and we have some goals. Just take microplastics, for example. We have microscopic little bits of plastic that never degrade sitting in the snow of the Arctic, because plastic that small gets carried up with evaporation, condenses, and falls as rain or snow, and then just stays there, and then critters get it in their systems. So we have birds all over the world with microplastics in them, even though they aren't ocean birds, because it has rained microplastics in places it shouldn't have. That's concerning to us. We can't just filter it out. It's too small. But there are ways, hypothetically, to maybe eat it with something like an algae. A.I. is going to help us figure out what would be good at consuming that particular compound without consuming the planet. Right? We don't want to go too far.
Nathan [00:11:00] What can we do? I do want to note there is some concern, especially when it comes to the artistic and creative sides. I mentioned A.I. can be like having an intern. You can go to ChatGPT and say, "Hey, make me an outline for..." and just tell it what you want: a five-paragraph essay, some other idea. And we use those things a lot, right? We have templates in Microsoft Word, we have templates in Canva. Having a template is not a bad thing, even if it's for coding something like a website. That's not bad. We still need creative minds for a lot of what A.I. is replacing right now. I'm not saying that A.I. will never step over into stealing other people's ideas. We've seen that with DALL-E and some of these other A.I.-driven artistic engines, where they're taking other people's work and mashing it together. That's bad. But you can also fix that with A.I. You can tag everything; it is possible to be done. A.I. is plenty smart. It's just a matter of motivation. So while we are concerned for the artistic side, trust me, artistic individuals, people with a mind and a gifting to know good ideas from bad ones, you're going to be needed to help guide A.I. Because at the end of the day, it can generate really cool stuff, but we still need the brains that can help it decide what the best decision was, help guide it, and, I guess, focus all of this power that it has. So I do not worry for our artists and our creatives in the world. I actually believe A.I. is going to be one of our best helps for getting through some of the mundanity of what we currently do. Right now, at a lot of these giant companies, there are people just crunching code that shouldn't have to be done by a human anymore. It's like when the excavator was made. We don't have anyone bemoaning that they don't get to dig ditches by hand anymore, right? Like, "Oh, man, I wish I got to dig that canal with a shovel." No one says that.
"I wish I could die in a cave-in." That's not what we say. We have excavators. We have professionals who use those excavators, even the excavators that are automated. There are people making sure those things work, and there will be people doing this work too. A.I. absolutely needs a steering hand, even if it could do the work on its own. One of the major pieces is making sure it doesn't just run on its own, and that's really, really important, because A.I. can be trained really well, but we don't always know what's going on inside it. And that's where we bump into the concerns.
Nathan [00:13:18] But before we wrap it up, I just want to state those three again. A.I. is really good at cutting through the clutter. It's going to be able to accomplish tasks that humans either can't do well, can't do at all, or can't do efficiently. And that's the third piece: it's going to be really, really good at helping us accomplish tasks we can't yet do. I used microplastics, but think of solutions for things like desalination, getting salt water to be clean water. If you want that to be efficient and able to help people in some of the poorest, driest parts of the world, A.I. is going to be the process for getting us there in a way that is actually affordable. Or things like fusion energy, making a star in a bottle, this idea of making electric power that is efficient and can be available to billions without having to burn coal. We can already do it a little bit. We had one of the real firsts in, I believe, 2022, when the National Ignition Facility's fusion experiment got energy outputs that were a net gain. The basic premise being: how much energy does it take to start and run the reaction, and how much energy does it produce? They got it to produce more than it took, which is a first in human history, and that's a huge deal. Then they shut it down; they've got a lot more testing to do. But A.I. is going to be one of the things that helps us monitor that, because fluctuation is such an issue. These reactions are so finicky that to do that kind of work, you need something that operates a little faster than the human brain and a little better than a static math problem. We need it to have wider parameters, and A.I. can do a lot of this work for us. But A.I. is also deeply concerning. If you are concerned about A.I., you absolutely should be. Please continue to question it. The book Superintelligence is a great one to read. It's about ten years old, and reading it is basically like reading about the Stone Age of A.I.
These are the most futuristic thoughts from ten years ago, and they don't come close to what is actually happening. In fact, a co-founder of OpenAI, which made ChatGPT and which Microsoft has invested in heavily, was talking about how A.I. has happened in the exact opposite order. We thought it was going to replace manual work first, then move on to some of the more routine things, like accounting and number crunching, then get to the creative work next, and the final thing would be scientific research and some of that really big thinking. And it's gone exactly the opposite. It took on scientific research first, then it went after creative work, and now we're working on how to implement it in manual labor and some of those other areas. The book Superintelligence didn't see that coming, and these are the people who work in the field. We need to keep asking, "Are we doing this right? Are we doing this well?" And that's, I think, the number one thing to note: with A.I., we are not ready. I'm not saying this as a "the sky is falling" thing. This isn't Chicken Little. I am a pretty big tech optimist. That might surprise some of you, but I really am. Someone wrote me last year and asked, "Hey, what do you think about A.I.?" And my whole response, when I looked at it on Instagram, was, "It's amazing." It was all the first half of this episode. But then we have Tristan Harris, and we have Sébastien Bubeck, these people who are in the field, saying, "Hey, we just want you to know that we don't know everything you think we know." For example, Tristan Harris, from the Center for Humane Technology, the guy featured in The Social Dilemma. He made an A.I.-focused presentation with Aza Raskin, also from the Center for Humane Technology, called The A.I. Dilemma. Just look it up on YouTube.
You can see it. Basically, his argument is that we're not being responsible with the release of GPT-4 from OpenAI; we're not doing this right. There was a survey asking A.I. researchers, "Will A.I. be the end of humanity?" And half of the researchers said there's at least a one-in-ten chance that advanced A.I. leads to human extinction. Tristan Harris goes on to ask: what if those were airplane engineers? What if half the engineers said, "There's a ten percent chance that if you get on this plane, you die"? No one would get on the planes. They'd be grounded forever. We had two planes crash out of the hundreds of thousands of flights happening, two planes, and that model was grounded for nearly two years, because that's not acceptable. We need to do better. That's the answer with A.I. right now. We just...We're not ready.
Nathan [00:17:57] Tristan Harris points out we weren't ready for social media. We weren't ready for the mental impacts, the social impacts, the massive economic impacts of social media, and we're still teasing those out. In fact, at the time of this recording, Jonathan Haidt, that researcher I've talked about before along with Jean Twenge, has a 350-page review of the research saying social media is not great for kids. And just this past week, Haidt released an article. I'm not going to remember the exact title, but it argues we need to get smartphones out of schools. Google it: Jonathan Haidt, H-A-I-D-T, phones out of schools. It's a full-on charge, and I can get you some documents on how to push your school board and your school leadership to get rid of smartphones in schools. It's not a joke. It's not a question anymore. We now know. We don't know with A.I., and that's one of the things we're concerned about: we just don't have the information. On top of that, not only are we not ready socially for what this thing can throw at us, we don't even know what it can do. Tristan Harris shared two examples. The first was that when GPT-4 came out and was released to the public, we didn't realize it had trained itself to be a research-level chemist. I had a buddy who worked for Xylogenics before they shut down. He went to work for a different company, and in his lab he could make all sorts of stuff. He was doing cancer research, but he had the chemicals to make all sorts of things. Those chemicals are under high-grade security. They're in a very specific building, in a specific part of the lab, and there's a very specific chain of events that has to happen to get access, in a very specific location. So yes, he has a lot of information in his head, but he's bound to a specific time and place where that work can happen, and there's lots of oversight and documentation, etc.
But GPT just went out to the world, and now anyone with the motivation has that research-grade chemist with them. That's concerning. There are things we don't want people in the world making, and we don't even know what that did yet. That happened this year, 2023. We don't know the outcome. We might not know for three, five, twenty years, but it's out there, and someone who's a bad actor, someone who wants to do something dangerous, now can, because we released a product we didn't even know had that feature. Because with A.I., the whole premise is that it trains itself. For example, GPT-4 was trained to respond in English, and the model was doing a great job. Then one day it responded in Persian, and it wasn't told to. No one trained it to speak Persian. It just decided to do that, and now it's fluent in Persian. That's concerning, because as Jeff Dean, the senior vice president of Google A.I., said about this kind of thing: we don't know why. We don't know why A.I. does certain things. That is concerning. That fits under the category of "we're not ready." Don't release a product if you don't know how or why it works. That is deeply concerning, in ways we can't even fathom. And I think it's best summed up by, oh man, where's his name? Right here: Dr. David Nguyen of the University of Minnesota. When asked, "Should we be terrified of A.I.?" his answer was, "I don't know." This is a professor, someone at the forefront of the field, who's doing this work on a regular basis and actively working with these resources. Maybe you haven't opened up ChatGPT. Maybe you haven't engaged A.I. knowingly in any purposeful way or tried to learn about it. Maybe you haven't read any books or articles.
I'm telling you, the people at the front of the field, the professors from Princeton who now work for Microsoft, the professors at universities who are regularly working with these tools, simply don't know. The researchers who are building it don't know. Maybe there's a one-in-ten chance this thing ruins us. Maybe there's a nine-in-ten chance it doesn't. Okay, but we'd like you to be more certain before you just throw this out to the public, and maybe double-check what it can and can't do. That's our first concern.
Nathan [00:21:55] The second concern with A.I. is its ability to, man, just destroy what we understand about information, communication, and privacy. So I'll take those on twofold. On information and communication: deepfakes and misinformation are going to be intolerable. If you thought social media was bad now, I would strongly encourage you to walk away. There will come a point in the next couple of years where deepfakes, this idea that a video is created using someone else's voice and face, will be indistinguishable from reality. You could be on a Zoom call, you could be watching a video, you could be actively engaging a person through text or through social media, and it's not a person at all. It's an A.I. That is already happening. But what we have ahead of us for the next couple of years, as this technology really gets rolling, is GPT-5, the next iteration, which, as Sébastien Bubeck shows, is significantly stronger than the version we've got, and the next few years will only bring stronger versions. There's just no way to know if it's a fake. It's going to be real-time emulation and disinformation and fake information. And that happens on the level of, "Hey, I got a call from someone I love." This happened within weeks of GPT-4 coming out. People were getting calls that mimicked the voice of someone they care about, generated from just a three-second sound clip. So, for example, you could get a call from me. My voice is on the Internet. Someone could just download it, emulate it, type whatever they want, and it would make a perfect emulation of Nathan's voice. A couple of years ago, investing firms were using your voice as your security password, because, "Oh, well, this is unique to you." It's not anymore. As of this year, 2023, that's no longer true.
So that's going to be problematic when we try to use the Internet to convey truth and hope and political freedom and any kind of ideal. It is going to absolutely shake to the core what we know to be true and the way the world currently works. You will still be able to know reality when you're with somebody in person. But augmented reality and virtual reality are actively being pushed as a massive front for the future of politics and faith and the way school works, and that information is getting more difficult to discern and differentiate. So please know that's a real thing. And I would say one of the most concerning applications we see right now is Snapchat. Pinned at the top is an A.I. bot for relationship, and it simply encourages you to come talk to it. It wants to be your friend. It wants to form a parasocial relationship. And the idea is, well, people should never be alone, so we're making a Snapchat bot. That's concerning. Don't do that. Don't use it. Run like it's the plague. It is not safe. It is not a good idea. It has not been vetted. It is not there for your child's well-being. Please do not use the chat bot. It might be humorous, it might be funny. If you say, "Hey, do you know where I am?" it will say no. But if you say, "Where's the closest McDonald's?" it will tell you. It is not made for public consumption yet. And while there are absolutely relational and hopeful benefits that an A.I. could help with, the creator of GPT said he basically wants it to be the Holy Spirit. He wants a constant companion for you that is always giving you the best advice, always giving you the best encouragement, and always helping you become the best version of yourself. Those are his words; I added the Holy Spirit part. But we have a Holy Spirit for that. That's what Jesus won for us. The point of the Gospel is not to become better versions of ourselves, but to die to ourselves and become more like Christ.
And as soon as your goal is A.I. for self-actualization, "I'm going to use this A.I. and prove to the world that I am better because this A.I. helped me get there," we've already missed our mark. That is not your call. That is not your purpose. That is not your design. And even if you become better at something, you're giving up a lot, because your inner compass is now directed toward A.I., and at some point you're going to quit discerning, you're going to quit differentiating what is true and what is not, because you're simply asking, "What is best for me?" That's where things break down.
Nathan [00:25:59] Which brings us to our last piece, the relational fractures this is going to make. I think the most concerning thing about relational fractures is that everything we understand right now is built on trust. Our economic system is built on trust; dollar bills are just IOUs. Our political system is built on trust; we have to believe those votes were cast. And we've seen that start to disintegrate a little bit. Just wait until A.I. gets its hands on things and you don't know what's true and what's not. We believe society will function, that people will follow traffic rules, that crimes will be prosecuted and not be allowed. And as soon as that trust breaks, as soon as you no longer know for certain that someone else is holding up their end of the deal, everything collapses. And that is what A.I. has the potential to do, both on the disinformation side from outside sources, and on our own end. How can you tell if anything is real if it's based on the Internet? It is concerning. Now, generally, A.I. is going to have to be part of the solution, and you will not put this genie back in the bottle. This is the nuclear bomb of the digital age. It is the thing that will most change humanity, other than quantum computers, which you haven't heard much about. I haven't heard much about them either, and I actively look for information on them. But quantum computers mixed with A.I. will change the history of humanity. Right now, what we have is the A.I. part, and it's not going away. It doesn't have to be bad. There are some really cool pieces. It absolutely will be used to progress the gospel, mainly because it's going to be so much more efficient, and you're going to cut through the clutter of outside people's ability to fiddle with what you're trying to do. We're just going to need more A.I. And no, you're not going to lose jobs forever. It's not going to put everyone out of employment, because there are still people digging ditches.
They just use machines that are way better. And let's be serious: someone's going to be the first counselor for an A.I. when it gets depressed or when it goes off the rails. Right? You're not going to delete the A.I. You're going to have to train that thing back into a healthier state. It's worth too much and it's too powerful. So trust me, there will be jobs for humans in a world filled with A.I. No one complains that they don't get to wash other people's laundry anymore, even though that job has been taken by your washing machine and your dryer.
Nathan [00:28:15] So note that you will have a place. You have a purpose, you have a soul, and in God's eyes you have value. What you'll do in the economy of a world with A.I., I'm not sure. There are going to be some benefits, but there are deep, deep concerns. We're not releasing this properly. We're not ready for it. We're not vetting it right. Disinformation is going to be rife, and it is going to affect the foundation of what we can trust. So please engage in a local church body; get tied in, in relationship. That is what humanity needs right now: to be known and to know others, so that we can be pointed toward the God who knows and loves us, who died for us while we were still sinners and raises us up to new life, so that we can love others out of the love He's given us. That's true in spite of A.I., and it's true through A.I.
Nathan [00:29:02] So I hope this is encouraging. I hope this made sense, helps you understand where A.I. is right now, and helps you engage the conversation with your young people when it shows up in social media. Understand that yes, there are amazing benefits here, and there are some downsides. And no, don't just trust what A.I. says. The whole reason you still need to learn information is that you have to be able to process what A.I. puts out and ask, "Is that true?" We still need to discern. We still need to turn to the Word and keep that mindset of: let's test and approve everything that comes out of one of these. So thank you guys for listening. Please share this with your friends if you think it would encourage them, and join us next week as we continue this conversation about how we can love God and use tech.
Follow this podcast: