Podcast

January 14, 2026

Carlos José Pérez Sámano

Episode 146: AI in the Classroom

Host Kinsley Walker sits down with UNC artificial intelligence teaching coach Stephanie Ward to break down what AI really is, how students can use it responsibly, where the limits are, and why writing and critical thinking still matter. From academic integrity to choosing the right tools and understanding environmental impacts, this quick conversation offers clear, practical insight into how AI fits into student life at the University of Northern Colorado. 

Welcome back to Bear in Mind, where we talk about all things University of Northern Colorado, from campus life to student resources and everything in between. I’m your host, Kinsley Walker, and today we’re diving into a bit of a controversial topic: A.I. 

Joining us for this episode is Stephanie Ward, an artificial intelligence teaching coach and assistant professor here at UNC. Today we’re going to explore how A.I. fits into your research processes, what it can do to help you, what it can’t replace, and how to use it without losing your critical thinking skills. Whether you’re new to A.I. and not sure where to start, or you’ve already been experimenting with tools for projects, this episode will give you insights you won’t want to miss. Let’s get into it. 

Thank you so much, Stephanie, for doing this podcast with me today. I’ve always been really interested in A.I.; I’m even taking an A.I. class. So my first question for you is, what can you tell us about A.I., or artificial intelligence? 

Oh, goodness. That’s a big one. Well, A.I. has been around for a really long time. What’s new, and what we’re all talking about more these days, is generative A.I., where the A.I., especially with text, has more understanding of the context of the words. It’s still essentially predicting what word might come next, but now it can understand the context of the sentence, which it didn’t used to be able to do. 

And A.I. has been around for much longer than people think. 

Yes. 

People think it’s a brand new thing, that A.I. just came out with ChatGPT, and it didn’t. 

Exactly. I mean, our Netflix queues of what we want to watch next, they’re looking at our patterns, and they’ve been doing that for a while. The facial recognition on social media, that was A.I. Those automated phone systems have been using A.I. So yes, it’s been around for a long time. 

And now it’s just this generative A.I. that’s a little bit smarter, and it’s scaring people. But do you think people should be scared when it comes to generative A.I., the whole “robots are going to take over the world” idea? 

I have such mixed feelings about that, because part of me says no, knowing what it can do right now. I mean, yes, it’s very impressive, but it’s not necessarily Earth-shattering, and it’s certainly not doing any kind of thinking of its own. But when I think about how fast it’s changed, that starts to make you worry: okay, how is this going to change moving forward? 

And with all that change, I know the environmental impacts are definitely there. I feel like everyone is using A.I. right now, in one way or another. How do you think we can limit the environmental impacts, or change how we’re using A.I.? 

How to manage that, yes. And we’re using A.I. in ways we don’t even know; it’s being integrated into our emails and our browsers, a little bit more out of our control. But with the environmental impacts, first of all, everything we do, whether we decide to get in the car or stream our shows, is consuming energy, and we’re making those decisions. So I think we have to think about A.I. in somewhat the same way: being responsible about how I’m using this. Is this something I need to turn to A.I. for and take up those resources, or is it not? It’s making the same decisions about responsible use of A.I. that we make in other areas of our lives around the environment. So yes, A.I. is definitely adding a huge amount of energy consumption, but it’s not the only thing. There are so many other aspects of our lives contributing to that. 

You mentioned appropriate use of A.I., being aware of how much A.I. you’re using and using it in a smart way. Where do you think that line is, for students or anyone? This is using A.I. in a smart way, versus maybe we shouldn’t be generating videos of SpongeBob doing a cartwheel? 

Exactly. I mean, we love to play around with the image tools, making those funny images or videos, but do you really need to do that? In my own work, I’ve been using chatbots and doing all these different things, and then I discovered Agent Mode, and it was a huge wow. But I can only imagine the amount of resources that was taking up, and so it was, no, I’m not going to use this, because I’m using it to play around rather than for a very real purpose. So I think that plays into it, but again, some of that’s out of our control, because it’s integrated into some of these things. Although if you’re really conscientious, I think you can turn off settings and try to manage it a little bit more. But yeah, the environmental one is a hard issue, because as an individual you can only do what you can do, and it’s already out there and so prevalent. 

In my class, at least, we call some of the videos and photos people make A.I. slop. You don’t need that video. I mean, we had a competition recently with a bunch of different schools all over the country, making a minute-long A.I. journalistic news story about an Olympic swimmer or something like that, I don’t even remember. You really have to keep re-prompting the A.I., or it does not understand what you’re saying at all. It’s the constant, I need a video of a man diving into a swimming pool, and then all of a sudden his legs are on his head and his ears are flying off. So do you think that will get better in the future, the prompting? Do you think it’ll get smarter? 

It will. It will definitely get smarter. And that’s actually a good point as far as environmental responsibility, because if we can create better prompts in the beginning, then we do less of that repeating. With some of my really long prompts, I’ve even started crafting them in a Word document and then copying and pasting them, in the hope that I’m getting more of what I actually want from the A.I., so that I’m not spending a lot of time interacting with it. So yes, I think a way you can be more responsible in your use is thinking about your prompting: what is it you’re wanting to get out of the A.I., and how do I ask it to do that? 

I see a lot of videos online of high schoolers complaining when they have to write an essay by themselves without A.I. or anything like that. And it’s definitely gotten to a point in my life where, if I’ve been using A.I., I want to sit down with a pen and paper and just write, because I feel like I start to forget after a little bit. 

Yes. And there has been research, over a long period of time, showing that writing is actually part of learning. Writing really helps us with our learning process, hence the reason we have always done so much writing, even outside of writing classes. Now A.I. can do that writing, but we still need to do the learning. So we still need to be able to write, to help us with that learning process. 

And with that learning process, and students using A.I. for school, how can students be aware of where that line is? Can I use this A.I. on a paper? Can I use it on a project? So many professors have a line in their syllabus that says no A.I. allowed, you’ll be immediately docked points or kicked out of class. Where do you think that line is? 

And that is a really, really hard one, and I do feel for students, because it’s human nature to want to take an easier route and use something that can do something for us. The very first thing is that you do need to look at the professor’s policy, and you do need to follow it. If they don’t want you using A.I., don’t use A.I. When it comes to that line, I think the biggest distinction is: do not turn in any work that A.I. has written for you. If you’re just copying and pasting, or A.I. is finishing that test for you, that’s an issue of academic integrity. You are not the author; you need to be doing your own work. But if you’re using A.I. to help you get there, that’s different. The research has shown that if you draft an essay and then get feedback from A.I., that’s actually cognitively even more helpful to you. So there are ways it can be very helpful, and it can be such a useful tool. Knowing that line is really hard, and I think faculty need to help students identify it. There needs to be a lot of clarity in their assignments, and a lot of discussion about that line in their classes, and it can be discipline-specific, so that plays into it as well. So yes, it’s a struggle. A lot of times, students aren’t even aware that they might have crossed a line, and I think that’s fair. With some of these tools, you use Grammarly and you think, oh, I’m just cleaning up my writing, which is great, that’s a good use. And then it’s changing entire paragraphs, and then maybe it’s writing the paper for you. It’s hard to know that line. So I do think all of us need to be more conscious of it, but faculty need to help with that too. 

I do love Grammarly. I’m really bad at knowing where commas go, and Grammarly always tells me right where it goes. 

And even plugging in my own papers: I’ll write the paper myself, put it into ChatGPT or Gemini, and say, grade this from A to F, tell me where I’m at. That can be really helpful too, just to have a second opinion if you don’t have the time to find someone to do that: oh, well, this is okay, but you could change this, because it doesn’t really match your first paragraph. That is really helpful. But do you think professors in general should be totally against A.I., totally for it, or somewhere in the middle? 

I think it’s going to be a combination. There will be situations, and I think about some of those first-semester or first-year writing classes, where writing really is the skill that’s needed, and there’s a real reason professors might say, I really don’t want the students using A.I. But they need to know enough to talk to the students about why, about that learning process and cognitive offloading, instead of just a flat don’t use it. They need to talk about the reasons and the parameters around it. In a lot of classes, though, A.I. can be incorporated quite effectively, so I’m hoping faculty will start moving toward incorporating A.I.: having students use it, thinking about what skills or tasks aren’t that important to the discipline or the content students need, incorporating A.I. there, and then focusing on the very specific disciplinary knowledge the student needs to come away with. But that is a lot of work; it’s a big change for faculty in a lot of ways. It’s something I think a lot of them are working on, and hopefully we’ll get there. But I do think there are a few instances where, yeah, there are probably reasons you just need to not use A.I. 

How can you train A.I.? Because it is a trainable thing. 

It is. And if you want to train it for you more personally, that’s where you do need to take advantage of the paid versions, because then it’s consistent in knowing your work and what you’re interested in. If you have an account with a free version, there can be some of that, but that personalizing really does help with the paid versions: it sees the nature of how you write your prompts, what you’re asking about, and, if you’re uploading content, what content you’re uploading. And then there’s RAG, retrieval-augmented generation, where it’s curated content. That’s the other way: you can make sure the A.I. is using content that you have put in, that you want it to focus on, and you’re training it just on that. I think that’s the direction we’re really going to go with A.I., toward that curated content you interact with, and that helps with the accuracy and reliability, all of that. 

And if I go in and train an A.I. to think like me, talk like me, act how I act, it’s going to be a little bit more biased. How do you train that out of an A.I.? How do you fact-check what the A.I. is saying? How do you make sure it’s not lying or giving you sources that are wrong, stuff like that? 

Yes. Evaluating A.I. content is an important piece that, hopefully, faculty are also talking about and incorporating, because we do want to be aware that A.I. has no concept of whether something is right or wrong. 

And earlier, you were talking about making A.I. more personalized to you, how you’d probably have to go with the paid version. Do you think the paid version is worth it for students? 

Overall? No, I think there are free versions that can do most of what you would need. You might want to use several; Perplexity, for example, is much better at research and at pointing you to the sources of where it’s getting its information. So there might be some using of several different free tools. This is where I struggle, because as a university, we don’t have an A.I. tool at the moment. We do have Copilot, but it’s the basic version. And the paid versions do offer a lot of advantages, including in education: you can create chatbots and all kinds of interactive things to work with students, like simulations of a nurse interacting with a patient. That requires the paid versions of A.I.; you can’t do it with the free versions. So no, I would say students are generally good with the free versions, though for some of those educational purposes, I hope there’s some discussion down the line about the paid-versus-free issue. 

And with all the different types of A.I. out there, do you think it’s necessary for students to use multiple kinds of A.I., or should they only use one, if you know what I mean? Like, should they use Perplexity and ChatGPT and Sora and Gemini if they’re getting help with a project, or should they just use Sora? 

Well, you definitely want to make sure you have the right tool for the job. Some of them are really good across the board; ChatGPT has a lot built in, like the image tools and the video tools, so that’s one advantage of it. But if you’re doing research, ChatGPT does have the deep research tool, which I think is just in the paid version, and that would be helpful, but you’d want to make sure you’re going to a tool that’s going to do that. So yes, it’s very important to understand what you want the A.I. to do and get the best tool for it. For probably most of your schoolwork, I’m not sure that’s as important, but certainly as you think about your career, being on the job, and your discipline, what are the tools that would be better in that discipline? Hopefully faculty are investigating this, and it’ll be part of what they bring into the classroom: SciSpace is great for scientific research, OpenEvidence is great for health sciences research, and so on, helping students know the right tool for their content. 

And then just a silly little question, what’s your favorite A.I. to use? 

I use ChatGPT the most; I have the paid version, and I do use it a lot. But for research, I really, really like Elicit, and I’m just using the free version of that. For research, that’s probably the one I use the most. But I’ve found ChatGPT is able to do so much: I’ve been creating these chatbots, and I even experimented with the agentic mode, which just kind of blew my mind. It can do such a variety of things. 

The agentic mode, you said? What is that and what does that do? 

So that goes out and does things for you. A chatbot is basically interacting with the content that you give it; it is trained, so it brings in knowledge from its training, but it’s not able to go out and take care of something, like booking a hotel or a flight for you. What I was experimenting with, and this goes back to the environmental responsibility, I really limited it because I realized, wow, this is a lot of resources. I was taking an example paper for grading, and you can run it through a chatbot, and the chatbot can grade the paper based on your rubric criteria. But what I wanted was for it to go out and check the student’s sources and see: does that source say what the student says it’s saying? Agentic mode can do that. It can go out and find the article, or try to; in some instances it can’t, maybe it’s behind a paywall, different reasons. But it tries to find the article and then can match it with what the student is saying. So agentic mode can basically take that extra step. 

That is really nice. I know ChatGPT sometimes, when you put in an article and ask it to find a quote for you, starts pulling out quotes that don’t even exist when you’re not paying for it. But yeah. Is there anything else you want to add? 

I guess just don’t be scared of it. You know, I get that there are the environmental concerns, some ethical concerns, concerns about misuse; I definitely get all of that. So be cautious, but don’t be scared of it. Play around with it. I read somewhere that it takes about ten hours of playing with a tool to get a pretty good understanding of that tool and what it can do. So find some reasons, like if you want it to help with something in your own life or with school, and play around with it. 

Perfect. Well, thank you so much for coming on the Bear In Mind podcast. 

Thanks for having me. 

This was so informative. I hope at least one person listening will think, maybe A.I. is not that bad, and will figure out a resource that will help them. 

Yeah, hopefully. 

But thank you so much. 

Thanks. 

That wraps up this episode of Bear in Mind. A big thank you to Stephanie Ward for joining us and sharing her insights on how A.I. is reshaping the world. Thank you so much for listening to this episode of Bear in Mind, where we keep you connected to what’s happening around the University of Northern Colorado. I’m Kinsley Walker, and I’ll catch you next time. 