Education Matters
AI in the Classroom: Education Matters
Season 16 Episode 6 | 57m 28s | Video has Closed Captions
Host Kelsey Starks talks with experts about using and teaching AI in the classroom.
Artificial Intelligence is becoming more common in day-to-day life. So how are schools and educators handling this new technology? Host Kelsey Starks talks with experts about using and teaching AI in the classroom. A 2026 KET production.
Education Matters is a local public television program presented by KET
[MUSIC] Hello and welcome to Education Matters, where we take a closer look at various topics in the world of education and break down how it matters to you.
I'm your host, Kelsey Starks.
This month, we examine artificial intelligence (AI) in the classroom.
We have quickly moved from this tool being hypothetical to now widely used in classrooms by teachers and students in K through 12 education and beyond.
Nationally, more than half of U.S. teens say they use AI chatbots to help with schoolwork, according to Pew Research, and 70% of teachers say they are also using AI for lesson planning, content creation, and efficiency.
Kentucky schools are leading the way with guidance for using this technology.
Kentucky, in fact, was one of the first states to offer AI guidance to all of its public school districts.
A significant percentage of Kentucky K through 12 districts have already implemented AI policies and guidance within their districts.
We're going to talk more about what that looks like, and how it coincides with policy happening in Frankfort, with our panel of experts.
They are here to help us understand this rapidly changing landscape.
And I'll introduce you to our panel now.
And yes, they are all real humans.
I will add Marty Park is the chief digital officer and deputy CIO with the Kentucky Department of Education in the Office of Educational Technology, where he leads the digital learning team and works closely with lawmakers on policy making and implementation for Kentucky school districts.
John Nash is an associate professor at the University of Kentucky's Department of Educational Leadership Studies, the founder and director of the UK Laboratory on Design Thinking, and a member of the Education Subcommittee of Cats AI, which is the Commonwealth AI Transdisciplinary Strategy, a university wide initiative to coordinate AI efforts across the state.
Maria Bennett is the chief information officer and director of technology at Scott County Schools, supporting teachers and school leaders when implementing instructional technology.
And Lisa Sawyer is the digital learning coach and project engagement leader at Johnson County Schools, where she is part of the district's technology team, exploring the integration of AI into classroom instruction and student preparation.
Well, thank you all so much for being here with us and talking about this exciting topic really, that is changing so dramatically and rapidly.
We so appreciate your expertise in this.
So first, we want to make a really important distinction.
And that is between using AI and making AI.
And Marty, I know this is something that you've been working to educate districts and policymakers on. Explain why.
>> Well, thank you for pointing out the distinction, Kelsey.
So it's really important to set the groundwork and foundation. When we think about artificial intelligence in general, making artificial intelligence fits nicely into the computer science space all across our state.
In Kentucky, we have computer science pathways, and those are students who are thinking about how they want to enter the industry and the workforce as computer science experts, making computer science and making artificial intelligence.
Also, it's important to distinguish how we all generally can use artificial intelligence, and we may use artificial intelligence to make stuff.
But what we think about with policy makers and talking through from a leadership perspective, we all need to wrestle with these topics and how we're using artificial intelligence.
And then we have students who want to go into that field to make artificial intelligence.
And so it's really, it's really helpful to distinguish between those two.
>> Yeah.
And Maria, I know that that's something that in your district particularly, y'all are looking into a pathway for that, correct?
>> Yes we are.
We're following some other districts who are starting to explore that pathway in this upcoming school year.
So we are excited to go in and see what that looks like and how we might bring that back to Scott County as well, to provide that pathway for some of our students.
>> And as well on the university level.
John, you all as well.
>> That's right.
I mean, the University of Kentucky has long had computer science majors, but just recently we've launched a major on AI making.
And I think Marty's done a great job of setting the distinction there, because this is important work in a pathway that students can take to enter the industry.
But from a guideline and policy perspective, I think mostly we're talking about users of AI.
>> Yeah.
And students, as we mentioned, are both making AI and using AI.
They are using it to research, study, prepare for exams, and not just students.
Teachers are also using it to transform instruction and grading, too.
Laura Rogers takes us inside Bowling Green High School in Warren County, where students and teachers are learning to utilize this new technology.
>> Blake teaches history at Bowling Green High School, but he's also focused on the future.
>> I've always been someone that's very big into like gadgets and tech.
>> That made him an early adopter of AI in the classroom.
>> I kind of played around with some of the new tools and kind of figured out what would work for me and what I could use day to day in my classroom.
>> Coming back from World War One.
>> He says AI tools like NotebookLM can work as a personalized tutor, and he can make sure the information students access is credible and safe.
>> It's not pulling data and information from the entire internet.
It's only pulling from the sources of information you provide it.
>> You can input up to 50 sources, like PDFs and textbook chapters, to help students study and complete assignments in his AP history courses.
>> I feed in our college textbook, I feed in our review textbook that we use.
I feed in primary sources.
I feed in videos from YouTube that correspond to the content.
>> He says it can also help them study for Advanced Placement exams without having to purchase extra materials.
>> I can create podcasts for them.
I can create videos, I can create slides, mind maps, I can create all these great study tools, flashcards, quiz questions.
>> Our teachers are using it in a lot of different ways in the classroom.
>> Megan Markham is the school district's digital learning coach.
>> AI sort of broke onto the scene, sort of like an explosion with absolutely no warning.
>> She advises teachers on meaningful integration of technology in their classrooms.
>> We sort of had to scramble to figure out what this was going to look like, and the potential pitfalls and opportunities it provided.
Now, I think we're turning that corner of really seeing the potential behind AI in the classroom.
>> In Melanie Dickinson's English classes, students are using MagicSchool to research conspiracies and mysteries, with her guidance on using AI as a tool, not a shortcut.
>> When you have an assistant that can deliver instruction and give immediate feedback and guide the student on a personalized learning path, that's what we want for every student.
AI makes that possible.
AI even makes that easy.
>> It can also make some things easier for teachers, like grading and giving feedback on hundreds of assignments.
>> Grading is just very grueling, and giving that high-quality feedback to every single student, whether you have 75 students or 150 students or 200 students, we're human.
So somewhere in that stack of papers, you're going to fatigue, whereas AI does not.
>> Both acknowledge concerns over artificial intelligence in schools specifically if it helps students cheat.
>> We had the same concerns with calculators.
We had the same concerns with Google.
The whole theory is, you can really unlock great potential for students and their personalized learning if you start using it.
>> She says some assignments may need to be redesigned, but there also need to be honest conversations with students about how to appropriately use AI.
>> What's acceptable and what's not acceptable.
You know, using AI to help you brainstorm is acceptable.
Maybe using AI to write your entire essay for you is not acceptable.
>> We're never going to put the genie back in the bottle.
The genie's out.
>> Kentucky plays a huge role.
>> You just have to find ways that it can enhance what you do in the classroom, and it can enhance what students are able to do in terms of preparation, in terms of deeper learning.
>> Much of that learning is useful for building the skills they need for college and career.
>> For KET, I'm Laura Rogers.
>> Wow.
It's interesting to see how not just students are using it, but teachers are using it so much too.
And that sums it up.
The genie's out of the bottle, right?
So Lisa, I want to go to you. Tell us how in Johnson County you all are trying to be intentional with how you're rolling this out, in what is sort of the Wild West right now.
>> Yes, I think it's very interesting.
What you said is interesting, that 70% of teachers are using this to design lesson plans.
And so in Johnson County, we're really working hard to integrate high quality instructional resources.
And we've been very intentional about that.
And it's very planned.
It's very scripted.
So what our teachers do, we have instructional learning coaches and we meet on a biweekly basis.
And we make a plan that we're going to roll out across our district.
And so for us, our coaches are working on what we call intellectual preparation unit and lesson internalizations.
And so what we're doing is, when the teachers are actually scripting their lesson and making annotations, they're looking for, say, misconceptions that students may have.
So we're really digging in and showing teachers how, yes, they are the center of this work, but how can AI really hone in and help them go deeper into the work?
So we really want to guide them with our instructional coherence piece and not let it be that wild, wild West, right?
>> Yeah, it's very difficult.
And as we saw there, some of the tools that teachers are learning are the same for you all.
>> Yeah.
We see a lot of those same tools.
MagicSchool is one that I think our teachers default to pretty quickly.
It's pretty easy.
It's got an easy interface.
It also has a component that allows our students to interact with AI that keeps it safe.
It keeps their information confidential.
It allows the teachers to give certain tools, but maybe not all tools depending on the task.
So I think that's a great entry point.
I liked where he talked about using tools that we already have available to us, NotebookLM and Gemini within Google.
We already have agreements with Google, and so we're able to make sure that we're keeping our students safe and kind of walled off.
It recognizes that they're students, and it's mindful about how it responds to their different prompts.
So we're trying to get our teachers to interact with those large language models, and our students to interact with them, to move beyond just the simple tools and stay grounded in the standards and those high-quality instructional resources.
So they're using it to enhance it or to personalize the learning without watering anything down or losing those high standards.
>> That's where KDE comes in, giving that guidance.
>> Yeah.
And one of our areas of emphasis here is to encourage, engage and empower the safe, secure and responsible uses of artificial intelligence in the learning space with humans staying in the loop.
We say this quite often. And unpacking that, you know, safe, secure and responsible, there's a lot to that.
And it's very important.
And you all have called that out perfectly to design experiences for students and teachers that are with trusted partners and avoiding the wild, Wild West approach.
But I think, you know, as pointed out, we're working towards what our students should know and be able to do.
That's our Academic Standards Foundation.
That's in all content areas.
And we also have our academic standards for technology, which are those integrated standards where students can demonstrate through their products, through their learning, how they can use technologies, emerging technologies such as AI.
And then our Kentucky academic standards for computer science, as we mentioned, that's the making-AI part.
>> Yeah, yeah.
And we're talking particularly here with K through 12, but that, you know, goes into higher education, obviously.
So are those the same types of tools that are being used in higher education space?
>> Yeah.
And some of the same notions, though we don't necessarily have, like, state standards on what gets taught, because with 17 schools and colleges at the University of Kentucky and hundreds of departments, each discipline is going to have its own sort of take on how AI fits within it.
But the idea is: what does responsible use look like within my discipline, whether it's in literature or whether it's in education, whether we're training teachers?
And then, yeah, NotebookLM is showing up.
The university has just aligned itself with Microsoft to provide Copilot, which basically has ChatGPT running underneath it.
But all students, faculty and staff have free pro accounts to use Copilot.
And so it's on the desktop of everything.
It's inside our Microsoft apps, it's inside PowerPoint.
So the tools are there.
Now what remains for us is to think about, if we've released it to everyone, are we in front of it enough to think about how to help people use it well?
>> Yeah, exactly.
And so often too, particularly today, students are so far ahead of this technology.
It's kind of hard for teachers or even administrators to keep up with that.
Have you all found that kind of push and pull where students are sometimes leading the way as far as being, you know, integrated so much into this technology?
>> I think so, and I think that understanding what our students do beyond campus with their personal technology is very important.
And so what we always think about, we wrap around what we refer to as digital citizenship.
And there are nine core elements that get to that ethical, that safe, that responsible, that secure approach that we need to teach and we need to learn together in structured environments, so that when our students do leave and they work with technologies outside of what we're providing, they have, as we like to say, the power of pause: to pause and say, should I, you know, click? Should I tap?
Should I work with this new tool that I'm not sure who published it?
And wrapping that into what we do inside of our schools is very helpful.
>> Yeah.
So how are teachers getting prepared to be able to keep up with students as they're learning this?
>> I know for us, we have several cohorts that are going on within our district.
So we obviously provide professional development during the summer, but we're also trying to integrate training throughout the year, because it is so rapidly changing.
We can't wait from one summer to the next.
We have to make sure that we're keeping teachers informed about changes, or things to be aware of, or new tools that are available to them.
So we do have several cohorts in our district at least, and I know that we have digital learning coaches in so many of the districts in the state of Kentucky, and instructional coaches who can help bring that to the classroom and make sure that we're putting the right tools in front of the students.
And we're giving the teachers as much information as they need to try to stay on top of it.
>> Has your experience been that teachers are interested in learning more about this, or are there some naysayers who are a little trepidatious about it?
>> I think it's just no different than students.
I think when we're working with adult learners, it's the same.
As we were rolling this out, there were some that were, yes, we're ready to go.
And then there were some like, no, I don't know.
So we have to give them those just-in-time supports to meet them where they are.
And so we have digital learning instructional coaches inside of every building that can provide that unique 1 to 1 coaching.
And we're so thankful for KDE guidance and for the digital learning network that they provide.
They have a digital learning coach network that feeds down into our network.
So that's very, very beneficial for us.
>> And that comes to teaching the future teachers as well, right?
>> Sure.
And the future leaders as well.
When we talk about that, we talk about how the decisions that get made about how AI gets used in a school often come down to discussions at the school community level, and sort of setting an ethical principle, like what we believe about AI.
And then as we all know, once that classroom door closes and that teacher is in that room, that's a whole nother set of sort of ethical principles that come into play.
Whereas the computer science teacher has a different attitude towards this use versus the literature teacher, the English teacher, the math teacher.
And so if everybody can kind of come to that and decide what appropriate use is within that, that's also an important discussion to have.
>> Yeah, because it'll be used across all of those areas in some way or another.
Right.
So I want to talk a little bit about the benefits and the pros of, of this.
Obviously, we talked about how teachers are able to be more efficient.
A lot of times they are able to meet students where they are a little bit better by using this.
As we heard, an assistant in the classroom, what are some of the best benefits that you all have seen so far of this technology?
>> For me, I think the ability to personalize learning for our students has been huge for us.
Being able to individualize instruction.
I'm working with a group of special education teachers right now, really making sure that we're clear on how to use AI with such sensitive and important information about our students, how to protect that, but also how to make sure that we're getting good resources in front of our students.
And I've got several special education teachers who are using our profile of a learner as a district, along with our Leader in Me curriculum, along with their behavior instruction, to tailor lessons that pull all those things together.
That teacher can do all those things.
But the amount of time it would take to bring all those different components in, to make it specific to her students' needs, that's where AI is saving her time. She's even said, you know, I'm able to go home and be with my family sooner at the end of the night. And she's still staying in the loop.
Like you said, the human has to stay in the loop.
These are teachers who have pedagogy and tons of content knowledge, and they're able to use that to build these incredible activities with their students that they maybe wouldn't have been able to do in the time span that they have today.
>> Yeah, yeah.
>> I was on a call yesterday for the Green River Educational Cooperative with Dr. Jim Masters over in Henry County, and saw some impressive use of NotebookLM, not for students, but for teachers and leaders who have put together notebooks.
The benefit, if you didn't catch it from the piece, is that in NotebookLM you're not using AI on the whole internet or the whole world.
You're using it only on the documents you put in your notebook.
And so if you're a teacher, a special education teacher for that matter, you might have all the rules and laws related to building an IEP in a notebook.
And then when it comes to develop the IEP or a 504 plan or something like that, you can have that chat inside that notebook and create customized plans for students along the way.
Same for career and technical education pathways, building plans for seniors as they think about moving into the workforce.
Lots of great advantages here that could speed up the way in which you do that.
And I'll just add that NotebookLM is quite a remarkable tool, because not only will it do these things for the teacher, but then there are all of the sort of studio applications on the side.
So for instance, if parents are busy, after making a plan for a student, I can create a video of the plan that I can send to the parent, that they can watch on their phone.
So lots of options here for people to speed up the work.
Yes, but also more high quality work for more students in the school.
>> And also go deeper.
I think, Lisa, you mentioned earlier, our students are starting to become pretty vocal about leveraging it as a study partner, leveraging different tools that are available when their classroom teacher, their content-area teacher, is not available: a 24/7 approach of being able to structure that content based on the high-quality instructional resource that's provided.
So these are vetted materials, vetted instructional content, that can then start to work as a partner, you know, a study partner, a shoulder partner, going deeper in the content knowledge, which I think our students are starting to share.
>> And going specifically to students who are struggling in a particular area and being able to talk to them and teach them in a way that they understand.
I would imagine.
>> A good example of that is, one of our schools has a newscast team, and they develop the content every day for this newscast. But now they're able to feed it into AI. It is their creation, but they're taking it a step deeper or further by enhancing those newscasts.
So it's just really neat.
The possibilities are endless.
>> That's the truth.
Yes.
And that could be the pitfall as well.
Right.
The same thing.
And so I want to hear from the students, though I did have a chance to sit down with a group of high school students to find out more about how they think AI is informing their future.
Take a look.
Well, thank you all so much for being here.
We're so excited to hear from you, the students, how you all are really using AI right now in the classroom.
So tell me, how are you all using artificial intelligence, creating artificial intelligence and how it's impacting you both in the classroom and in your other daily life?
>> So kind of starting off, the main thing that really comes to mind is my sophomore year in our media arts class, we used some generative AI to just create little things here and there.
It's part of our graphic design curriculum at Knox Central.
We've used AI before in order to do little things like touching up graphics and reports and whatnot.
I'm a broadcaster, so every now and then I'll use it for statistics.
I'll use it to figure out things that maybe websites can't tell you.
Something that maybe ChatGPT could tell you that, you know, César or 13th Region Media Network can't.
So that's really how we kind of make and use it.
We do a little bit of both.
>> Yeah.
>> So in school, I often use AI to study most of the time.
I know last year, so I'm a sophomore this year, and last year I took an AP Human Geography class.
And at the end of the year, you know, if you take an AP class, you have an AP test.
So to study for that, I use ChatGPT.
I had it make me about 60 questions.
And I just study those questions every night and every day.
And I would, you know, submit my answers to ChatGPT, and ChatGPT would tell me if I was wrong or not.
If I was wrong, then it would explain to me, you know, how I got that wrong, how to get it right next time, what to do on the test, stuff like that.
So it's really helpful.
Really.
>> Yeah, I can see that.
And, and Grayson you, you are very involved with artificial intelligence because you are working on, on creating AI.
Tell us about it.
>> Yes, ma'am.
So for my workplace, I actually developed an AI model that will scan laptops and detect damage on the surface that maybe we'll miss.
So I see AI as more of a tool for me, and it allows for me to work alongside it.
So if there's a bigger issue that needs more of my time, I can use AI to kind of help me with the smaller things that don't need my full attention.
And I can kind of work in parallel with the AI and all that.
And how I use it in school is that I use it kind of as a personal tutor.
And that's what I think is one of the biggest strengths about AI is that it can be trained not only to match the curriculum, but it can also be trained for that student.
So I'm a big advocate that students should not use it to cheat, but they should instead use it to learn and have that tutor where they can bounce ideas back and forth and just get to know more about a certain topic.
And what's so great about AI is that it can be used in any topic because it pulls information from everywhere, so there's not a limitation on what it can do.
So it can help a calculus student, or it can help a student in an English class, which is what I see as a huge value in AI, especially in the classroom.
>> It's, it's an interesting time to be a student for sure.
So we heard a lot about each of you all talked about some of the benefits, what you like about using AI.
Let's talk about some of the pitfalls.
What are the cons or have you encountered anything that you think would be a downside to using AI?
>> Absolutely.
Every day you see kids cheat on essays. And like he was saying, you know, you should use it to learn, not to cheat.
Both my parents are educators, so I know the impact that having a real teacher has for you.
So whenever you use those things to make it automatic, it slows down your ability to figure out problems on your own.
And then you have that shortfall whenever you're taking your end-of-program testing, or if you're taking, you know, a big test and you're not allowed to use AI. It becomes a big factor in that.
>> Yeah, yeah.
What are you guys, have you experienced some downfalls?
>> Yes.
I think one of the biggest downfalls of AI overall is people using it to create propaganda or fake media.
Like a lot of times I'll be on Instagram and I'll see some kind of post that's AI generated, and it looks so real.
And I really question myself.
I'm like, whoa, is that really happening in the world?
And I'll actually use AI to check if it's actually happening in the real world, which, you know, is kind of funny. But I think that's one of the biggest downfalls of it: anybody has access to it.
So it's just like, if I wanted to get on ChatGPT and make something mean about my friends and post on Instagram, I totally could.
And most people would believe it because it looks real.
Like it looks like that's, that's a real thing that just happened.
>> And then it kind of makes you second guess everything you see, right?
Yeah, right.
How about you, Grace?
>> I really like what Harrison and Reese said, because it reminds me of an analogy that I heard, that AI is becoming like a tool.
So the best analogy is that it's a hammer.
And if you use a hammer properly, then it's going to be a great tool.
But just as well as it can build, it can also break things.
So, like Rita mentioned, it can be used on social media and it can be used to make up fake things, kind of like what Harrison said.
I really liked what he said about students who become too codependent on AI, where instead of using it to learn, they tell themselves, oh no, I'm using it to learn, by just doing, you know, this essay, it's ten points, ten points.
And then it kind of grows into a problem where they're starting to use it on these huge assignments that are worth more and more.
It's not necessarily that it's worth so many points; it's the fact that these students are losing the cognitive ability to think and go through problems. I don't necessarily think that's the AI's fault, because with anything, if there's an easier way to do it, everyone's going to want to do it the easy way, regardless of what it is.
So I don't think that it's necessarily the implementation of AI, but I think it's more how people are using it that is making it negative in this sense.
>> Yeah, I think about it like when when I was younger and I memorized everybody's phone numbers and this is dating me, but, but now I don't know anybody's phone number because it's right there in your phone, right?
You know, you have your contacts list.
You don't need it anymore.
>> It's easy now.
>> Exactly.
It got a lot easier.
So tell me, if there's one piece of advice you could give, or one thing that you think adults should keep in mind when it comes to AI, whether that is rules that need to be put in place, how they can help you all in the classroom, what's something that you wish adults knew about AI?
>> It's here to stay.
It's not going to go away anytime soon.
People think about, you know, the environmental impact it has, and whether or not your opinion on it is negative or positive, it's here to stay.
It's going to be here for a while.
And so learning to use it ethically and responsibly obviously should be our number one priority.
ChatGPT has been around for years.
AI generated content has been around for years.
I'm a junior in high school, and not one time has it ever been mentioned in the entirety of my education.
And I'm in a graphic design pathway in school, and not one time is it mentioned, is it being taught, how to use it ethically and responsibly.
And that is a big issue.
That's something that needs to be figured out, because when we talk about these people who are cheating and using it for bad things, they don't know any better.
There's nobody telling them, hey, you probably shouldn't do that.
It's all up to what your morals are and what your ethics are.
And I could use ChatGPT on every assignment, but I'm like, well, that's not really what I think I should do.
But there are people who don't have that safety guard, that guardrail there to help them and prevent that from happening.
There is no, like, AI ethics class. I don't know if you guys have it, but we don't have that. And it's not even in the conversation of being anywhere.
>> To piggyback off what Harrison said, I think that the biggest piece of advice is that it's time to almost speed up.
AI's main thing is to automate everything and make everything faster.
So for a teacher, they need to recognize this is not a cheating device. They don't need to see it that way. I don't think so. I think that they should see this as a tool. In the same amount of time they could spend calling it a cheating device, they could be using it for making lessons, they could be using it to help students, you know, with grading and all that.
To me, it's a tool.
It is a thing that students and teachers and parents and everyone who is involved in school or anything really, they need to see it, that this is something that can make everything go faster and it's time to speed up and use it the way it should be used.
>> So if I had one piece of advice to give to adults, I would say never stop being curious, never stop learning about this kind of thing.
Because eventually, like he said, we'll be using AI.
I believe in our everyday lives, and I know Grayson mentioned that some people don't use AI right now because of their use case, and they don't really have a necessity for it.
But I think as AI evolves and we continue to grow, I think the AI will be pretty like necessary to everybody at some point.
>> Wow, some really great points there from those students, right?
Yeah.
So much to talk about.
I want to first start with the AI ethics part of that. What is being put in place for students, or teachers for that matter, to teach kids the ethics of AI?
>> If I can start, yes. You know, way back three years ago, a long time ago, when we started talking about artificial intelligence and generative AI, when interfaces first started to be developed that we could all interact with, we've long believed that those responsible uses are already integrated and built into policy. We all have responsible use policies, acceptable use policies. We all have guidelines that are in place, and we've had them for a long time.
So it's moving from policy to practice, integrated into the everyday classroom. You know, we first started talking about how AI is kind of like the sunlight: if you get too close to it, you might get burnt. But also it helps things grow. And we've shifted again, a long time ago now, maybe two years at this point, to where AI is like the air.
And so it's everywhere. It's in almost every product. By the end of this calendar year, we will likely see 80 to 90% of anything we touch behind a screen have artificial intelligence integrated into it.
And so it's learning how to use it, as our students, who I would say are future policymakers and future leaders for sure, said. But really getting into that idea of shifting into practice, integrating it into everyday work and everyday conversations. We need our students and teachers engaging in these conversations daily, because it is fast. The adoption rate of these emerging technologies is higher and faster than anything we've ever seen.
And so that's, that's an important awareness and piece of understanding.
>> Yeah.
Faster than policy can keep up with.
Right, right.
And they also talked about, and I know a big concern is, that lack of critical thinking. Are students learning to get things done but losing a piece of the critical thinking puzzle?
>> I think it's all about how intentional we are in teaching students how to think critically with the use of AI.
One of the things I say on a regular basis is that every time we're using AI, there should be a reflection question or something for students to kind of wrestle with and think about: did I use it as an assistant, or did I use it as an author?
What did AI get right?
What did it get wrong?
What did I change?
Was that prompt the best prompt that I needed to put in there to get the response I needed?
So, and I think that's important for us as adults as well.
And maybe we do that a little bit more naturally because we've built a lot of these critical thinking skills.
So I think we have to be really intentional about giving that to our students. Let's try this prompt. What did you get? What did you think about that response? It's giving them the reflection piece. You know, we do sentence starters for students as they're learning how to write, and I think giving them prompts or reflection questions is that scaffold we give them to help them start critically thinking on their own.
>> Yeah.
What do you guys think?
>> Yeah.
>> Well, I think it's never in isolation. I don't think we teach ethics in isolation.
I think we model it.
And so I read an article called The Assignment is Dead.
And really what that article was saying is that the assignment is not dead, but how we deliver that assignment has changed.
And so we have to be really intentional: chunking the assignment, having checkpoints, bringing it in, having that human element involved, saying and modeling, okay, this is what you have.
This is fantastic.
Now let's try this with AI and let's see and let's level up.
Let's see where that goes.
>> Yeah, exactly.
There was a lovely comment from one of the students.
He said, I wrote it down.
It slows down your ability to solve problems on your own when it's used just as a tool to submit something as a product. And I think this is referred to as cognitive offloading. This is a real thing now, and a lot of research is going on right now about the extent to which use of generative AI can lead to cognitive offloading, in other words, reduce the productive struggle that we want students to have in the work that they're doing.
And so you're exactly right.
I think the presence of AI in and of itself is not the problem.
What we're discovering is that its presence has been a diagnostic, and the diagnostic is we need to rethink how we're presenting many of these assignments.
If your assignments are eminently AI-able, then that's your own moment of the power of the pause, to stop and say, how might I redesign this so that students can bring critical thinking to the task, bring deeper learning to the task, and not let the AI take all of the work off their hands?
>> We absolutely. We used to say, if you can Google the answer to that question, then it's the wrong question. And so it's getting to that point of, you know, demonstration of learning. What does that look like? And there's an opportunity here that we can really push harder on, to say, how do we, as students and teachers, demonstrate that we're learning and growing and getting better? And there's defense there, there's conversation and live interaction, instead of just submitting or turning something in.
I love that article.
I would also mention there's a great bit of research out of MIT, released in, I think, June of 2025, that gets to this same point: research comparing students who only used their brain, no technologies, versus a group of students who were only using AI, and what that ultimately does. The memory, the retention, the lack of productive struggle obviously is in there. The brain activity.
>> Is that the EEG study?
>> Yeah.
>> I just read that yesterday.
It's fascinating.
>> It is.
It's fantastic.
And the, I think, very interesting part about it is that when they flipped the groups in that research, the brain-only group, now using AI, saw their product and performance accelerate, because they had the foundation, because they had the base.
And I think that's what our students said.
Exactly.
>> Yeah.
But how do you build that foundation when it's all right there, when it's so easy for them to get to? So I guess that's the big question, right?
>> These are challenging questions, right? But important, and important to try to work together to solve through.
>> And we've kind of talked with our teachers about identifying what you consider to be cheating. Thinking about it, yes, the copy and paste, I think we would all agree, is absolutely cheating, where if they're writing it completely on their own, that's not. But what about all that in the middle? And what about when a student is able to demonstrate that they have something mastered? Maybe we wouldn't let them use AI for the initial creation of things, but now they know how to write that piece. Maybe now we let them use AI. Once they've shown us that they've mastered it, they can use it to move on to the next skill, where we might still need to work with others on some of those more foundational skills.
So I think one of the things I tell teachers on a regular basis is that it doesn't have to look the same day to day. You might say, this would be considered not working with academic integrity, this would be cheating for this assignment, but you can use it in that assignment. So being fluid with what we are considering to be cheating and not, but really making sure that we're sharing those expectations and articulating them to our students, making sure they understand: this is the time you're allowed to use AI, this is the time you're not allowed to use AI. And having those regular conversations with each assignment.
>> Yeah.
And I love that the Center for the Enhancement of Learning and Teaching at the University of Kentucky has just a fantastic student AI usage scale that gets to the same point: it's really task by task, project by project. When you're designing what learning, or the demonstration of learning, should look like, that scale is helpful to understand. Is it some use? Is it no use? And what does the in-between look like?
>> There was an interesting finding last year out of Stanford that looked at whether the incidence of students cheating, acting without academic integrity, being dishonest had gone up since AI had shown up.
And the data suggests that it hasn't.
And what we do know is that students have been cheating for centuries, and.
>> As long as there's been a...
>> Test, a test, right?
>> And so the presence of AI offers a new way to do that.
But really, as we think about training teachers and school leaders, it's a question of: what is it about the conditions of school that makes students want to cheat or act without integrity?
It may be that they don't understand the material.
It may be that they feel too much pressure at home to perform.
It may be that they just want to get good grades, any number of reasons.
And so what the research suggests is that maybe it's not about taking a hard line on AI, but rather thinking about what makes students feel like they belong at the school, that they are learning material that they're interested in, that they can take part in projects that are relatable, and then that starts to create a culture.
So you wouldn't think that a program about technology and education would come down to a culture issue.
But actually, that may be part of it.
>> Yeah.
Interesting.
Very interesting.
And by the way, we have all of these resources for you online too, so you can read all of these studies that we're talking about. But I do want to move on to talk about some of the guardrails and the policy that is going to be put in place, or has been put in place in some instances. Back in 2024, as we mentioned, a long time ago, the Kentucky Artificial Intelligence Task Force was established to study AI usage in state agencies, identify some of the risks, and discuss ways that state government can help foster innovation through this new technology. Renee Shaw sat down with Senator Amanda Mays Bledsoe, co-chair of the AI Task Force, to learn more. Take a look.
>> Senator Amanda Mays Bledsoe, thank you so much for your time.
We appreciate it.
>> Happy to be here.
>> A you have been involved in this for so many years, not just on the state level, but the national level.
And you have helped lead task force here in Kentucky to examine what AI means and the impacts.
Tell us about the AI task force, why it was created in the state legislature, and what are the recommendations that you all have fashioned so far?
>> You know, it's a fascinating topic.
Artificial intelligence was something that three or four years ago some people knew about, maybe had an idea of, but nobody had any concept, I think, of how quickly it would become part of our vernacular. And really, the task force was started because there was AI in all these different fields. What are we talking about when we say artificial intelligence? Do we have any baseline of what we both mean when we talk about using AI and how we're using it? The task force's real goal was just to get some kind of buy-in around the legislative body about what we are talking about and how we might tackle the responsible use of AI. Is there a government role in this space, especially at the statewide level? And the role for state legislation or state government should maybe be different than the private sector's.
>> AI in the classroom.
I'm sure teachers and educators feel like, oh my goodness, I mean, it's already here.
It's not the future.
The future is now.
What are you hearing from educators about their concerns about artificial intelligence in the classroom?
Did they look at it as a tool to be used to their advantage, or is there some trepidation around it?
>> Both.
I think it's like anybody else, quite frankly, like me.
You know, you can put a calculus-level calculator in front of me and I can do algebra, maybe some math on it, but I won't be able to use it to its highest potential. Just like if you give a kindergartner that same calculator, or you give a grad student that calculator, the ability to use it to its full potential really depends on the user. And so how comfortable an individual is, whether it be a teacher or a parent, and trying to figure out where they are personally, actually really impacts how comfortable they are using it in the classroom.
>> How are lawmakers responding now to how AI is evolving and how it can be used as a positive educational tool?
>> It can be used as a positive.
I think about kids on the spectrum, who have different learning disabilities, and when you're able to use AI tools to help that kid, that student, become more successful in maybe 30 minutes of time with a chatbot.
For instance, a student was telling me he has a problem with semantics, and so he was using an AI chatbot to go back and forth, using a particular word in a variety of different ways, to help him understand the semantics of that word.
Great use.
He doesn't need a teacher walking him through that.
That's something he can learn by himself.
It's also a great way for students who maybe need some repetitive work on certain issues. They can work through software that teaches back and forth, evaluating how they're doing, and that can say to the teacher, hey, he still hasn't mastered this skill, or he has mastered this skill.
That's a great use of it in the classroom.
So there's a lot of potential, I think, for individualized learning that will really help kids and I think help teachers on our side.
From a legislative perspective, you know, it's emerging so fast. Legislation doesn't really work that way. Legislation is kind of the guardrails, and the regulations have to be done through the Department of Education and through other bodies. And so we're going to kind of leave that to them for now, I think, and see how this plays out. And where we see maybe they're going in a direction we're not comfortable with, then we'll step in.
>> You know, there is some concern about students perhaps using AI, misusing AI, I should say, as a tool for cheating to make it not just easier to understand a concept, but to, in short words, just rip it off.
Right?
And there is a concern about integrity in the use of AI and intellectual property.
And, and, you know, credit where credit is due, right?
Are you hearing that from educators who are concerned about, how do I know that what I'm getting from a student is generated from their mind and not from a chatbot?
>> I think we have to teach kids appropriate AI use at an early age and what a good use is. It's kind of like the difference between cheating, when you take an idea from a source, and quoting the source. Those are still the same techniques that we use when you're writing a paper. We have to start educating kids early on the same kind of AI use, so that they're not using it as a crutch, they're not using it as a substitute for their own thinking. It should be used as a supplement. And if we can train kids that way, ideally the teacher will know enough about the student and have that relationship to know the difference.
>> Well, thank you, Senator Amanda Mays Bledsoe, who is co-chair of the Kentucky AI Task Force in the state legislature.
We appreciate your time and your expertise and for helping us understand this a little better.
>> Yeah, it will be interesting to follow some of those guardrails that they're able to put up. And Marty, I know you testified and presented to this task force just last year. So what is your take on the task force and some of the high-level guidance that it has been able to provide in this space?
>> Yeah.
So first, a high five to the General Assembly for, you know, forming the task force. Testifying with legislative leaders in this space, they are asking the right questions. It's a very thoughtful, deep dive into where artificial intelligence plays in, not just in our schools, but in industry and in our state. And from the task force work come some of those guardrails, as mentioned. We already are working toward including, in that responsible use framework, things like disclosing the use of artificial intelligence, and things like not submitting sensitive data into artificial intelligence tools when we don't know where those data go. So it's really great to see leaders coming together to discuss very thoughtful implementations. From the task force work, we pulled together what we refer to as AI in Action across Kentucky in our public schools. And so we've been able to highlight where the task force work is connecting, maybe from a professional learning perspective, maybe from an integration or implementation perspective. For example, we have a strong initiative on teacher retention and recruitment, and we know one of the areas that produces a hurdle of sorts is that people don't know, you know, they have questions on how to enter into the field. And so we've implemented an AI agent that can really help. In a very short amount of time, three months or so, it has had over 3,500 interactions with potential teachers in Kentucky. And that AI in Action work is really looking at safe, secure and responsible use of artificial intelligence.
>> Yeah.
I want to know, too, on a school level, when it comes to Scott County and Johnson County, what is the guidance that your teachers are looking for from KDE or from lawmakers?
>> I think initially they were kind of asking, what is the policy? What is...
>> Can we use this?
>> Yes, exactly. Is it banned? Right.
>> I mean, that was a discussion, correct.
>> Yeah.
>> So for us, we put together a committee pretty early on, and it included teachers and administrators and parents and community members, like from our local college, and also some students, to kind of ask: do we do policy? Do we provide guidance? Which route makes the most sense? Kind of like you talked about, AI is changing so quickly that it's hard for any policy to completely stay up to date. So we ultimately decided on guidance, and on being very clear about what our guidance was and making sure that every teacher in our district knew where to find it and knew the nuts and bolts of it, so that they knew what our stance was: that we do have it as a district initiative, it is something that's important to us, and we want to make sure that we're on the front end of it, but also adjusting some of the policies that we already have.
We did update our responsible use policy based on the guidance from KDE. We also adjusted some of our code of conduct language. When we're talking about cheating, we already have a definition for what cheating is. What we did is add to our list of examples the use of AI outside of the specific guidance or expectations of the teacher. Including that line allowed us to be very clear about our stance on cheating without creating an extremely long policy that would have to adjust and change. I think we all agree that we can call cheating what it is, and we can recognize it when we see it.
>> It's important. And your feedback early on was critical. From the Department of Ed perspective, we're very clear that we are encouraging, engaging and empowering the safe, secure and responsible use. We needed to say it. We need to say it. And I think that's important, so that then our school districts and our leaders could come together and say, well, now how are we implementing with those guardrails, with that understanding?
>> It gives them the permission to be curious, like the students talked about.
>> Exactly. When they hear that, yes, we are okay with you using it under these guidelines and this guidance. But yes, we want you to try to use it. We want you to think outside the box and think about new ways to have classroom instruction.
>> Yeah.
Because it is not going away as we talked about.
Right.
Lisa, how about you in Johnson County, what kind of guidance are your teachers looking for?
>> That's very similar to us, but what we really want to focus on, too, is being fluid.
We don't want to get rigid.
We don't want to be locked down.
We want them to fail forward.
We want them to explore the tools, but in a way that, you know, keeps our instructional coherence in place.
So it's really just getting what they need, when they need it.
And everyone's needs are different, right?
>> And that policy just, you know, does take a while. And AI is moving so fast right now, so it is kind of hard for the guidance to keep up.
We are quickly running out of time.
So I want to ask all of you the same question. What is the one rule or principle that you would put in place today when it comes to AI?
And John, I want to start with you.
>> Wow.
So many things.
I think, you know, maybe it's not a rule or something I'd put in place, but I would just say, thinking with my colleagues here in a school: if you're an educator, consider whether the work you're assigning is AI-able. And I think also think about the ultimate learning goal you have, your overall learning objective, and the steps you're taking to have your learners reach that objective. How much of that is totally chatbot-able? Right? And if it seems like it is, then start to work with your digital learning coach, work with your guidelines, and think about how you might redesign that. That can be a little hard to swallow at first. It's work, but I think it's worthwhile, particularly since this is not going away and these tools are going to be present.
>> Yeah.
And it's those critical thinking skills.
You got to think outside the box.
Lisa, what's the principle you would say?
>> I think from listening to the students, it's listen. Listen to the students, listen to what they're saying. And don't be rigid. Be flexible, be fluid, and do what's right for students.
That's what we're all here for.
Right.
And I see AI as a tool.
It's a tool in the toolbox.
And we can increase student engagement with that tool.
And that's, that's where we want to be.
>> Yeah.
Love that, love that.
Maria.
>> I think for me, the advice I regularly give is: you don't have to be the expert. Nobody can be the expert on AI. It's moving so quickly. But get in there. Wherever you're at in this journey of learning AI, just move a little bit forward each time. Just try to think of a new way that you can incorporate AI. Be cautious, but don't be afraid. Try new things. And for our students, make sure that we're providing the best opportunities for our kids.
>> Yeah.
>> So my rule, my principle, would be: follow everything that they just said.
>> No.
>> I think student teacher voice is critical in this work in order for it to move forward.
And I think the opportunity to implement your scale of adoption, your scale of implementation, should be a rule. Are you, as a school leader or district leader, ready to promote specific uses for specific AI? Are you ready to integrate? And then finally, understand the current partners that you're working with, and understand where they are pulling in artificial intelligence to do the work and to help you get better.
Don't fall asleep.
Yeah.
>> Yeah yeah yeah.
Stay curious.
Well, thank you so much for joining us for this edition of Education Matters focused on AI in the classroom.
I hope you've learned something new this evening to help you navigate some of the topics we discussed. We have linked this program, as well as the resources we mentioned, online. You can find them at ket.org under Education Matters. You can view and share this program from there as well.
You can view and share this program from there as well.
I'm your host, Kelsey Starks.
On behalf of all of us here at KET.
Have a great night.
