Connections with Evan Dawson
New AI literacy course aims to prepare high school students for a very different future
10/7/2025 | 52m 21s | Video has Closed Captions
Geneva High launches NY’s first AI literacy class—teaching students to use, not fear, the tech.
Geneva High School is offering what its teachers think is the first AI literacy course in New York State. The goal is to help students become literate in the many forms of artificial intelligence already available. We meet the teacher, who thinks that while students can't be allowed to simply cheat using AI, they also shouldn't be asked to become Luddites. So, what is the right balance?
Connections with Evan Dawson is a local public television program presented by WXXI
>> From WXXI News.
This is connections.
I'm Evan Dawson.
>> Our connection this hour is made in a high school classroom where an English teacher is poring over an essay.
That teacher feels reasonably sure that the student who has produced the essay has cheated.
It's no secret that students are using ChatGPT and related A.I.
programs to create term papers or poetry, or research, and more.
But the teacher wonders, how can I prove it?
And what am I supposed to do about this?
How can I teach A.I. a different way?
Some schools are trying to ban A.I. from classrooms.
At Geneva High School in the Finger Lakes, there is a forthcoming class, a new class on A.I. literacy, and it might be the first of its kind in New York State.
When we heard about this, we wanted to understand how the teacher plans to approach the subject of teaching artificial intelligence, knowing that students are growing up in a world that will change dramatically.
What should they know about A.I.?
How can they use A.I. to become better students or better thinkers, instead of outsourcing the work to A.I.?
What about the ethics involved here?
There's a lot to talk about, and we're glad to not only have the teacher, but a couple of students who are also serving as teaching assistants.
Welcome to George Goga, a teacher at Geneva High School.
Thanks for coming in.
Thanks for making time.
>> Thanks for having us, Evan.
>> And across the table, welcome to Vivian Hoang, who is a senior at Geneva High School and a teaching assistant as well.
Welcome.
Thanks for being here.
Thank you.
And Payce Chu-Lustig is a senior and a teaching assistant.
Welcome.
Thanks for being here.
Thank you.
So, George, I'm going to start with you here.
You reached out to tell us about this.
And I was instantly intrigued because I figured we would start to see some kind of A.I.
literacy.
I hadn't heard about any examples yet.
To your knowledge, you're not the only one, but maybe the first in the state to launch when you launch in January?
Yes.
>> I believe so.
>> Okay, so, you know, there's not necessarily a paradigm for this.
Take me through a little bit of your mindset about why you wanted to create this class, what the response of the district was.
Give me some of the background there.
>> A couple of years ago, we started to see a shift in the discourse surrounding artificial intelligence at the secondary and at the post-secondary level.
I think a lot of news outlets, a lot of cultural outlets started to frame artificial intelligence as a savior, a new tool that's here to fundamentally reorient how we engage with one another.
And on the other hand, we had a section of people who were deeply concerned and troubled about how education is changing in the face of artificial intelligence.
And so I took a look at the landscape for education.
I also took a look at some of my students who I know were going to join this landscape after graduation.
I teach at Geneva High School.
I'm also an adjunct professor at SUNY Geneseo, and so I have the honor of working with students in high school and then undergrads and graduate students as well.
And so I sat down and asked myself, how can I prepare students to enter a world where A.I.
is both a tool and a problem?
And I built the course.
The course is called Artificial Intelligence from Basics to Breakthroughs.
It starts this spring semester at Geneva High School.
I built the course with an acknowledgment that any serious conversation about artificial intelligence in the educational world needs to balance ethics, philosophy, environmental studies, and all of the different fields from which A.I. pulls and in which A.I. participates.
You asked a little bit about our district as well.
Geneva City Schools has really been blessed with what I think is the most reasonable approach to artificial intelligence in New York State.
There are a lot of different models we could have drawn from: a prohibitive model, a restrictive model.
There are school districts that ban its use entirely.
At Geneva, we took a step back and said, this is a critical literacy that students deserve to understand both its limitations that you spoke about a minute ago and also its capacity to make change.
And so that kind of gave genesis to the course and to where the course is going to start in just another semester.
>> So just following up that point on how your district views A.I., one of the things that we're going to be working on in this program is an effort to understand how districts all across our region and how higher education are creating policy for educators.
And by that, I mean, if you're a teacher at a high school, is there a policy for you or is it up to you?
Is it up to you to say, hey, you suspect a student cheated?
You can handle it.
We'll support whatever you decide.
Or is there a blanket ban policy?
Is there a discipline policy?
It's pretty ad hoc.
It's very different from place to place.
And some teachers have expressed frustration with that.
So let's just start with that in Geneva, has there been a district policy that has helped guide?
I mean, what does that look like?
>> Yes.
There is.
We actually have an acceptable use policy for not just technology, but artificial intelligence specifically.
And this guidance came out actually just a few weeks ago.
Now, given some of the think tank work that our district, and as you said, other districts, are participating in as well, it is not ad hoc for our district.
We do have very specific guidelines at the instructional level, at the district level, and certainly at the student and family level as well.
And I think, to our credit, it's really done good work to control for some of the issues of a more ad hoc model, where you see potential opportunities for litigation opening up when these types of conflicts do occur between students and teachers, teachers and families, and so on and so forth.
>> This is not a full hour conversation about A.I.
cheating.
That is sort of tangential.
This is about the class that's going to be taught at Geneva.
And frankly, I think the kind of class that's going to be taught everywhere soon.
And it's a consideration of what students do need to know.
I mean, I was telling the panel before the program that I was talking to my 13-year-old recently, and it's like, I understand you need to learn what these tools are, how they might apply, but I want you to understand that these tools can be used to level up your thinking, your understanding, your skills, or you can try to outsource all of your critical thinking to these tools and get away with it.
And that's what scares me.
But I think a literacy course will probably help people with that, I think.
So I think this is necessary.
I think it's coming starting in Geneva.
We're going to see it elsewhere.
we're going to get to that in just a second.
Let me ask you, in the last two years, maybe three, maybe September of 2022 is when we first saw a lot of ChatGPT.
Have you seen instances of students using it to cheat, to outsource their thinking, to write whole papers?
Has that been an issue?
>> I have, but I don't think my experience is unique in that regard.
>> Definitely not.
Yeah.
And so when did you see enough of it where you thought, wow, this is moving fast.
This is becoming very, very common.
We've got to get ahead of this somehow.
>> It actually happened in a conversation with one of my colleagues at the college, who teaches in the sciences.
And he and I were talking about the kind of plethora of academic research article creation that had come as a result of platforms like ChatGPT, Perplexity, and so on and so forth.
And the conversation then led me to consider a little bit more of the instructional level.
What my students were doing.
I teach a writing course at Geneseo, and so how A.I.
is shaping writing, I think, is probably the most fundamental conversation that you can have at the moment when you're teaching undergrads about writing.
And so I took a step back and acknowledged that, yes, academic dishonesty was happening, like you mentioned.
And at the same time, this is also a tool that we might be able to use to develop our students' capacities, and to see it more so as an aid.
And I think there's a variety of metaphors that we might talk about later for how we envision our relationship to artificial intelligence.
>> Your literacy course is going to be aimed at what grades?
>> This is open enrollment, really nine through 12.
Most of the students at the moment are in 11th and 12th grade who have enrolled.
>> Is there an argument for doing it even earlier?
>> Perhaps, although in just talking to Payce and Vivian a little earlier (and there's good developmental research surrounding this as well), there's really an argument to be made about holding off on some A.I. instruction until really 11th and 12th grade, largely because by that level, you hope that teachers have developed the capacity in students to think critically about their own position within their education, as opposed to seeing it as something that they could use to write an essay or to complete a homework assignment with.
>> I talked to a number of high school teachers three years ago, when we really started to see this as an issue.
And they, almost to a person, thought, look, if it's not a human being creating an essay, I'll know; I'll be able to tell.
Three years later, it's really sophisticated.
I'm always reminding myself the good advice I had from someone who works in tech who said, always remember that today is the worst it will ever be at everything it's doing, whether it's creating music or poetry or essays or presentations or whatever it's doing.
And so do you feel, George, like if a student hands in an essay and you know nothing about the work that they did, that you would be able to tell?
If I gave you ten essays and I told you five are A.I. and five are human-created, do you think you could pick them out?
>> Not with 100% accuracy, no.
And my caveat to that is I think that reaffirms the need for a literacy course.
And, you know, we're talking a lot about writing, and I anticipate we might talk about things like visual media as well, conversations about deepfakes, for instance, and the way in which that type of spoofing is becoming increasingly hard to detect by everyone.
>> So why then, a literacy course?
Why do you think a literacy course will help with the issue of A.I.
assisted cheating?
>> Well, I think a literacy course establishes different ethical and philosophical frameworks for how to have this conversation.
I love having this conversation at all kinds of levels, whether it's with students, whether it's with the general population, whether it's with other teachers and professors.
And it really boils down to how do we envision the form of the technology that we're using?
The framework for how it's used acceptably or not acceptably, in a pragmatic sense, whether it's going to help or whether it's going to hurt; I think that's a conversation that, quite frankly, we don't have the experience to weigh in on at the moment.
And so I'll be curious to see where it does go two, three, four years from now.
But at the larger, I think, philosophical, ethical level, it's going to develop a capacity to think more critically about when this type of support is appropriate and when it isn't.
>> What do you say to parents especially, but maybe even students who feel like what I want to do is just shield myself or my child from A.I.
entirely.
It's not a good thing.
It tempts too much in the wrong direction, and we will do the Luddite thing.
What do you say to them?
>> I think that prohibitive model actually has its place.
You know, I came onto your show thinking that we would have this general conversation about frameworks and about the philosophy of A.I.
And I envision that we might talk about the argument that we should just shut this down.
And it's a conversation I've had with colleagues.
It's a conversation I've had with school district leaders in the past.
And quite frankly, it's a conversation that we just had over lunch with my two students here about what is the benefit of engaging in this debate, or should we simply shut it down?
And I think there's arguments to be made at younger ages for being more mindful to how we introduce students to A.I.
And quite frankly, I don't think it does have a place in classrooms before 11th or 12th grade at this point, since this technology is so new to thinking in the way that we're using it.
A.I. is certainly not new, but it's new to how we're developing these competencies.
And so there is value in a prohibitive model to it.
And there's also value in a permissive model.
Once we are dealing with students whose level of maturity, whose level of intellectual rigor when it comes to this field, can really support the ultimate outcome, which is it's a tool and it's there to assist.
>> And do you feel it's part of your responsibility to make sure students are graduating with some understanding of how these tools work, given that they're going to be in the quote, unquote real world and may have to use them.
>> I do, and I take that responsibility quite seriously.
In fact, our mission statement for Geneva city schools is that we will educate and graduate all students to live lives of consequence.
And so with a life of consequence, I think, is a life in which you're able to engage with different technologies, engage with different frames of working with people that haven't been developed yet, and perhaps will be developed within our lifetime, and perhaps will be developed within the lifetime of those that come after us.
>> We're going to go to school in our second half hour with George Goga, and we're really going to dive into what this class, which launches in January...
>> You said in January.
>> Launches in January. It is going to be like an A.I. literacy course that maybe has some of our listeners going, can I sit in on classes like that?
We all need that.
George, can you introduce us to Payce and Vivian a little bit, and why you wanted to invite them to this program today?
>> Absolutely.
So Payce and Vivian are not just leaders within our community in Geneva, New York, but they're also my teaching assistants for our AP English program.
They're great writers, they're great scholars.
They're also future college students.
And so when I sat down to envision what this segment might look like, I certainly wanted to share some of the good work that I know our community has been doing, but also wanted to highlight some of the voices that students know and that they trust.
And so they're leaders within our school, and they're leaders specifically within our English department, because they share in some of the responsibility that is preparing students to enter a world where there's a serious premium placed on writing and good thinking.
And so I'm honored that they decided to join me here today.
And I hope they'll get a chance to share a little bit about their experience.
>> Well, let's ask Vivian and Payce a little bit about that.
Vivian Hoang.
let me start with you.
Do you love English?
Do you love writing?
Do you love writing?
>> I do love English.
>> And why is that?
I think it's super important to be able to communicate what you really think, and communicate those important values and morals and beliefs.
And I think English plays a giant role in society, within students, allowing them to be able to express themselves.
>> So you've gone throughout your high school career, and now you're a senior?
>> Yes.
>> Graduating in the spring.
>> I am.
>> And do you know where you're going yet?
>> Not yet.
>> Okay.
But going somewhere?
Yep.
Okay.
Throughout your high school career: you get to high school at this age, where you obviously have this budding interest in writing and English and expression, but the GPTs of the world debut, right, when you're probably a freshman.
And so I want to know how you've seen those tools get used.
And I'm not asking you to name any names or do anything awkward.
I just want a general idea of how you've seen the impact of these A.I. tools over the course of your high school career.
>> Yeah, I think a lot of the talk about it is that it's very detrimental to students just with their critical thinking and, you know, their ability to write efficiently.
I kind of look at it in a different lens.
I think if you use A.I.
correctly and you use the right prompts or you look at it in a different lens, it could be very helpful.
You can use it to organize different ideas.
You could ask A.I. to suggest new ways to phrase something, and it could be very helpful in that way, to spark new conversations, to spark new thoughts.
So I don't necessarily think that it's this detrimental thing that everyone thinks it is.
And kind of.
>> Do you think it can be?
Do you think students occasionally do use it to try to just do the work for them?
>> Yeah, 100%.
Which is why I think this course is so important.
I think it's important to teach this type of literacy to students so they can manipulate it in a way where it's not going to hurt them in their way of critical thinking, but aid them in doing so.
>> You gave a few kind of examples here, but if you could take me through how you would approach an assignment and if A.I.
has a place, how you would use it.
>> Like an assignment? Like, if I were to get an essay prompt or something, I guess if I had, like, a general idea, I could ask it, oh, how should I organize this type of idea to make it fit this type of criteria?
I guess, if that makes sense.
I know another thing that I've done before, just to help me within Mr. Goga's course, is put my notes into A.I. and ask it to help me make a study guide, or ask it to help me understand it in a way that could be more comprehensible to me in that sense.
>> But those are your existing notes.
>> Yeah.
And what you are describing to me sounds like you're looking for suggestions on organization, working with the existing notes that you have generated, right?
And then you ultimately get to decide, I like this organizational flow.
This makes sense to me.
This doesn't I'm going to be the ultimate sculptor and creator of the final piece here.
>> Yeah.
>> So you don't feel like it's not your own work?
>> No, not at all.
I feel as though I do have the final say, and that it's ultimately my work.
It's what I'm presenting.
I guess A.I.
is kind of just a tool, an aid in that.
>> Okay.
And the class that Mr. George Goga is creating here is probably going to be the kind of class that's going to be in a lot of schools.
Right?
And I take the point that maybe not middle school, maybe not elementary school for different reasons.
I think there will be debates about that.
but what do you think, future students, what do you think middle school students in your district ought to know about this?
What do you think should be in this literacy class?
>> Like, what they should know about the course?
>> Well, about A.I.
in general.
>> A.I.
in general.
I think like I mentioned earlier, I think they should look at it and use it as a tool and not as something that's just going to give them a product that they can give out to their teachers and get a grade back on.
I think if you use it in a smart way an ethical way, I think it could be very beneficial.
Yeah.
>> This is like part of the exact conversation I had recently with someone who works in A.I. development, and this is how I mathematically think about this.
And maybe Vivian and Pace are going to get me off the cynical side of this.
So I see A.I. as allowing human beings to do one of two things, if we're going to put it into boxes; we can bifurcate it this way.
You can level up your thinking.
You can get better at anything you're doing.
You're doing drywall at home and you're not good at it.
You can learn ways to do it faster.
What are common mistakes?
I'm trying to, you know, do an oil change at home.
I'm trying to write a presentation and create something.
There's a million different applications, and I can use A.I.
to help me level up, and I get better, I get smarter, I get sharper, I get better at a task.
The other side of that is I can outsource to A.I.
the creation.
I say, hey, I've got to do a 30 minute presentation on this topic.
Here's a few parameters.
Give it to me.
An A.I.
basically does it, and I walk into that hall and I do the full 30 minutes that A.I.
gave me, and it would probably be, I guess, acceptable.
Maybe.
I mean, there's some problems, but it's getting better.
My concern, Vivian, is I think 95% of people are going to do the latter and like 5% are going to do the former.
And I want to be wrong.
Do you think I'm wrong? How would you do the math on that one?
How would you do the math on that one?
Are people going to use this to be better essay writers, to think better, to organize their thoughts better to learn better skills, to do things more efficiently and better?
Or are people going to outsource to A.I.
the tasks and go, well, that was easy.
I didn't have to do it.
A.I.
will do it for me.
What's it look like?
What's the breakdown?
>> I think ultimately, if you're using A.I.
for something like that, to get a product to give to a teacher, you're looking for something good.
You're looking for something you're going to get a good grade on.
But ultimately, I think your goal is to create something, like I said, good.
But like your goal is to better yourself in that way.
I think if you use A.I.
as a tool, you're really wanting to better your own skills.
I don't think that everybody is going to just go to the shortcut and just get that.
>> That's a wise answer.
Now, Senator George Goga, Senator Hoang dodged the question from the panel here.
So what's your mathematical breakdown, George?
I mean, like, I don't know that it's 95-5.
I worry about that.
What do you think it would look like?
>> To be honest with you, I share your cynicism.
I think a lot of people, when I enter this conversation might see me as being on the other side of that.
But I do share your cynicism about the product and the process.
I don't have a specific breakdown in terms of who's offloading and to what extent they're offloading the cognitive capacity.
I will say from an instructional standpoint that it is forcing teachers, professors, people who work within education to reframe some of the assignments and tasks that we do ask students to do.
I know you spoke specifically about a presentation.
I'm thinking perhaps something even more rudimentary than that.
A teacher hands out a packet of worksheets, a kind of a standard assignment that might go alongside a book.
And while that type of assignment did serve a purpose at one point, to ensure, perhaps, that a student read the book, to ensure that a student had done the homework, when you look at the instructional rigor and the framework for an assignment like that, it's really more so designed to build compliance, as opposed to perhaps a higher-order level of thinking that you might develop alongside that.
Now, I'm not saying, of course, that you should use A.I.
for something like that, but I would advocate for the future of this conversation.
Moving us in a direction where we start asking about what is the ultimate goal of this instructional model that you're using.
And if you are a teacher who is creating assignments that are perhaps not A.I.
resistant at this point, what is the larger value that's underpinning that?
>> So do you change your assignments?
Do you change how you assess and test?
>> I do, and I think teaching and assessment must change because of A.I., but they also needed to change before A.I. came along as well.
>> Yeah.
Well, let's get Payce's thoughts. Payce Chu-Lustig is a senior at Geneva High School, a teaching assistant.
And I want to ask you the same kind of questions.
Do you love English?
Do you love what you've been doing in school for four years?
>> Yeah, English is one of my favorite subjects.
And I really like it because I think it's a valuable course that allows a space for students to cultivate discussion on whatever we're learning about and express different opinions.
>> You still reading books in English class like we used to?
Yeah, it's been a while since I've been in class.
What's something you read that you enjoyed recently?
>> Well, right now we're reading The Scarlet Letter.
>> Oh, Scarlet Letter.
Yeah, there we go.
It's a classic.
And, you know, so when you are thinking about the work that you're reading and you're taking a look at your assignments here, could you try to outsource some of that to A.I.
as it exists right now, or do you think your classes have found ways to make sure that they know if you're doing the work?
>> I mean, I feel like for, like, what Mr. Goga is saying about, you know, different assignments with very basic, like, comprehension of what you're reading, you can definitely outsource that.
But at the end of the day, I'm not exactly sure if you're learning from that.
But when we're writing essays, there are programs where you can put the essay through and see, like, if the program detects A.I., and then that is a way to prevent people from using A.I.
to essentially cheat.
>> I've been told that that doesn't work very well.
I've been told that those programs that are supposed to detect A.I.
are pretty gameable.
Yeah.
I mean, what do you think, George?
>> It is a little bit of a double-edged sword with some of those programs.
In fact, I think there was a recent article that came out where they put the Constitution through one of those text trackers, and it came out as being 100% written by A.I.
So I think.
>> Man.
So, okay, so they're not perfect. But part of what Payce is saying, I think, is so important here. Truly, if I were 17, I'd be really worried if I were using A.I.
just to try to get the grades that I needed to go where I need to go.
Let's say you get what you want, you get the transcript that you want.
You get the grades that you want.
You get into the school that you want, but suddenly you go off to college and they have A.I.-proofed everything.
And now you have to think critically on your own and you've got to do work, but you haven't developed those skills.
I would be terrified that I was going to get exposed at a job, or in higher ed.
And it sounds to me like part of what you're saying is, you know, you're building the skills that you feel like you need to take into the world.
Is that fair?
Yeah.
So do you know what you want to do, you know, next year and beyond?
>> Not really.
Maybe something sciencey, but we'll see.
>> Okay.
Are you worried that your fellow classmates who would use A.I.
to cheat aren't going to have the skills that they need?
>> I mean, there's definitely that concern of the loss of critical thinking by using A.I.
Personally, I kind of always keep in mind, if I'm going to use A.I., whether I'm actually going to be learning it; and if I'm not going to be learning it, I probably won't use it.
But there's definitely that risk of my classmates not thinking about it like that and just not learning the material that their teachers want them to learn.
>> What do you think an A.I.
literacy class should include?
>> I think it should teach the students to recognize, like, the benefits and the costs of using A.I., specifically in school.
And if they have a job where they can use it, that way they know, I guess, the risks of using it.
>> Do you guys have any friends who have named their ChatGPT?
>> Yeah.
>> Yes.
Have you named yours?
>> No.
>> Okay.
Payce?
>> Nope.
>> Okay.
But you know, people who have, right?
>> Yeah.
>> Oh.
>> I don't know if you do, Vivian, but...
>> Okay.
No, I know people. I know plenty of adults who do.
And that's an individual choice.
It's uncomfortable for me because it anthropomorphizes something that is not human.
And I think that takes us further down a road that feels uncomfortable.
I don't know if that comes up in A.I.
literacy.
There's a curveball for you, George.
>> It does, certainly.
I mean, non-human actors, I think, might be a more neutral term to consider them with.
But when you do give it a name, you give it an identity.
And with that you almost assume a level of sentience that the technology does not have.
>> Exactly.
I mean, yesterday we talked about Tilly Norwood, who is not a real person.
It's an A.I.
creation.
But even news articles are referring to the creation as "she," with a pronoun that an A.I. lab gave to this A.I.
It's an "it" to me.
But we're moving down this slope pretty fast, and that tells us we're going to treat this in an anthropomorphized sense.
It's going to be very common.
I think people are going to have A.I.
friends.
There will be A.I.
romantic relationships.
I hope nobody you know is in an A.I. romantic relationship.
Okay.
No, we're naming the A.I.
We're not.
I know it's coming, I get it, I'm trying not to be a Luddite.
And on the other side of this break, I want to ask George Goga to kind of take us through what this class will entail.
You've heard what Pace and Vivian say that students ought to be learning.
I think it's just so interesting that A.I. literacy classes are needed, and they're going to become common.
But this might be the first of its kind in New York state.
We haven't seen a lot of them, that's for sure.
Geneva High School's class is going to launch in January.
The teacher behind it, George Goga, is with us, along with two students, Payce Chu-Lustig and Vivian Hoang, and we'll come right back on connections.
I'm Evan Dawson. Wednesday on the next Connections, we sit down for the first time with the new superintendent of the Rochester City School District; Dr. Eric Jay Rosser joins us in our first hour, taking our questions and yours.
Then in our second hour, we're talking about Lyme disease: understanding ticks, the threat they pose, what we should know about it, and how to stay safe from it.
Talk with you on Wednesday.
>> Honest human stories.
That is what we do at NPR and we do it for you.
Keep listening.
>> Here on WXXI.
>> This is Connections.
I'm Evan Dawson. Chris in Geneva writes to say: Evan, for the second day in a row, you have the old guard dealing with a new age that challenges their hegemony.
I'm very proud of my Geneva to be taking this on, but let's remember the context for many of our students.
The schools are not succeeding with their methods.
I have found Khan Academy and A.I. to be extremely effective for our kids struggling with our methods of massing adolescents in large groups, largely in the sole hands of the teachers we have been able to hire. Like the A.I. discussion on the arts, the proof will be in the data, if we are truly open to the benefits we don't quite yet understand.
That's from Chris in Geneva, and that's a wise point, George. I mean, that's a humble point. Chris might use the word cynic, and that's fair, though I would say that cynicism is an unhealthy, corrosive force; realism, even when it's dark, is important.
I try to be realistic, not cynical, although at times I get cynical.
Chris is saying there may be benefits we don't understand yet, and we can't just push it all away.
But are we teaching it the right way?
What's Khan Academy doing?
What are other sources doing?
So when you went to create this course, how much of it is your own material?
How much of it are you bringing together from other sources?
>> Roughly half of the course is inspired by my research and my development within different instructional models.
For people who have taught things like computer science that intersects with fields like artificial intelligence, and the other half of it comes from other institutions, other universities, for instance, Harvard University, which has different position papers on the field as well.
MIT, for instance, is part of this conversation, too.
So roughly half and half.
>> And I, I really appreciate Chris's point about having that humility.
Maybe in that sense, we've been talking to Vivian and Payce about, you know, the negative things that A.I. can do.
Vivian, what's a way that you think A.I. could benefit the world in the future? When you think about artificial intelligence, do you think the world's going to be better in some specific way? Is there anything that comes to mind?
>> Not specifically, but I do think, in the student sense, as I've said, that it's going to do amazing things for critical thinking.
I know we talked a lot about how it's going to diminish that, but I truly think if the right people manipulate it in the right way it can really aid those discussions and aid those conversations.
>> So use it in the best way possible.
Level up.
Good things happening, good things possible.
Okay.
Payce, are there ways, in general, even outside of school, that you think the future will be better because of A.I.?
>> Yeah, I mean, we talked about this a little bit earlier, but A.I. is pretty good at recognizing patterns within data, and that can be used to benefit us scientifically in whatever field.
>> But yeah, the pattern recognition is really interesting. One of our first conversations on A.I. was about pattern recognition in medicine. In the early use of A.I. to detect cancerous tumors, the particular program doing it concluded that any picture with a ruler in it meant cancer was present, because in the pictures it was collecting and processing, the actual cancerous tumors were often measured in size with rulers. So the A.I. thought, well, the ruler must be connected to the cancer. That was kind of silly, and people poked fun at it. I probably poked fun at it, but they cleaned that up really fast.
And now what they're doing for medicine is pretty remarkable.
The data collection, looking at patterns, helping hopefully in predictive models, cancer detection, avoidance, treatment.
I'm very optimistic in medicine.
I'm very optimistic there.
But the focus of the class is largely going to be on what students need to know.
In general, the tools, how they can use the tools, and the good and the bad.
Take us through how you want to build this curriculum.
>> Sure thing.
So as I put together the course, I had in mind a series of different objectives.
The first is purely historic.
I think it's important for students, but really for all of us, to understand the foundation of artificial intelligence, not as something that showed up within the past couple of years, which is really where, colloquially, a lot of us first became familiar with it.
I'm also interested in students understanding how algorithmic thinking works.
We talk a lot about algorithmic bias.
For instance, in a very casual sense, we're aware that A.I. does have bias. It is not bias-neutral, and it has the capacity to exacerbate certain biases because, as you said, Evan, it doesn't think; it predicts. And that's a huge domain that we'll consider as a class.
I'm also interested in students being able to apply A.I. in practical scenarios, using it for things like immediate problem solving, whether that's very casual in what you're doing at home or perhaps in a more academic sense as well.
And then lastly, the last arm of the course will be to think about the ethics and the philosophy surrounding use cases.
When it is and isn't appropriate to use it.
But then perhaps even beyond that, brainstorming things like policy, what should, for instance, a democratic society's outlook on this powerful technology look like?
>> So when it comes to the impact on society, are your students worried that they're going to have fewer jobs because of A.I.? Are kids talking about that?
>> Some, yes. Largely my college students.
So at Geneseo, we do a lot with A.I. within the context of writing, but we also talk about A.I. within the context of the School of Business and the School of Education, and the ways in which artificial intelligence does have the capacity to, as I call it, automate people out of their livelihoods in some situations.
So it's a very real concern.
I don't want to discount that in any way.
>> And getting better at automating people out in a wider range of fields than I probably would have thought possible even a few years ago.
Yeah.
So Vivian and Payce, do you think about your future? Any concern there, Vivian, that wherever you land in adulthood, in terms of work, A.I. is going to challenge or maybe take away jobs?
>> Yeah.
For sure.
I don't think A.I. is going to diminish anytime soon. I don't feel like it's going to go away. Especially now, I feel like it's only going to advance. So obviously there are lots of concerns about different fields that I might want to go into that may get overtaken by A.I., or have difficulties because of it.
>> Pace, what about you?
>> Yeah, I definitely agree with Vivian.
I have not started my job search yet.
but it'd be definitely interesting to see what happens to a lot of the different fields out there.
>> George also mentioned ethical implications, and that's probably way down the list.
Even on this show, we do a lot on A.I., obviously, and we probably haven't talked enough about environmental impact. We have on things like Bitcoin, but not so much on A.I. So what do you want students to know, in your literacy class that's going to launch in January, about environmental impact?
>> The environmental impacts cannot be overstated.
They're really key to understanding the entire discourse.
And so I think when we think about the ethics and the philosophy surrounding this technology, why it exists, why it should or should not exist, one of the large appeals is how it impacts our Earth, how it impacts our natural resources and the Finger Lakes.
We talk a lot, of course, about the lakes and their capacity to participate in this dimension.
And in this conversation as well.
I know, Vivian, you mentioned earlier a little bit about the different ways in which pollution, and certainly the capacity for water resources to be strained as a result of A.I., come into play.
But I think that's part of this conversation from the beginning.
>> How much, Payce, and you can't speak for your generation, that's not fair, but how much are you concerned about environmental impact?
>> Well, personally, I want to probably do something in the science field, and so environmental concern is very central to my identity. And I know there is a lot of risk in using A.I., with the amount of resources it takes up. And you need really rare minerals, and that often comes with unstable mining practices. And also, as somebody who lives in Geneva with the Bitcoin factory there...
>> Just down the lake.
>> Yeah, yeah.
So for me, I feel like it is something that I think about before I use A.I., and it is central to whether I use A.I. in that situation or not.
>> Vivian, what about you?
>> I definitely agree with Payce.
I think it's important to think about what is going to happen if I do use A.I. And I think the situation now is bad; why make it worse?
>> A reader has a book suggestion, and wants to know if you're going to have books as part of the curriculum. Is there any reading that you're going to have students do?
>> Yes.
Everything from white papers to nonfiction, popular nonfiction, and some fiction as well.
>> So this is the book. This is the suggestion. Don't get too worried here. It's an actual book. It's called If Anyone Builds It, Everyone Dies. Are you familiar with it? No? Okay. The full title is If Anyone Builds It, Everyone Dies: Why Superhuman AI Would Kill Us All.
And one of the coauthors, Eliezer Yudkowsky, has been an A.I. researcher for years. On the spectrum of voices, I would call him probably the gloomiest.
So don't worry.
In studio here, I don't think we're all dying tomorrow.
We're all going to die someday. Probably not tomorrow. But that is a hyperbolic title for a reason.
Everybody likes to sell books.
That's one thing. But Eliezer Yudkowsky, who's never been on this program, and I would try to get him on, maybe we'll see if we can get him, is really concerned about the unintended consequences part of A.I.
So it's kind of the paperclip model that people talk about, the idea that A.I. is not a human; large language models are predictive, and even A.I. intelligence, as we understand it, is not like human intelligence.
If you assign it tasks, it's not necessarily even programmed to think about the human consequences.
And so the long held idea is, well, if you tell A.I., it's got to make as many paperclips as possible, it'll eventually turn all of us into paperclips.
You know, a lot of people who work in A.I. say, come on, there are guardrails. That's silly.
Yudkowsky thinks that eventually some kind of jumping the guardrails problem will happen.
So in general, how do you talk to students and teach about guardrails and the darkest concerns? Not the gloomiest, most hyperbolic concerns, but the realistic ones: we don't always know what A.I. is going to do, we can't always explain it, and there are real risks in that.
>> And the risks, as you mentioned, can be unintended, to say the least.
At this point.
I think the way you lead with that is complete transparency.
You know, A.I., and specifically some of the models on which it's trained and some of the algorithms that we talk about so casually, don't have a level of transparency that, as humans, we enjoy when relating to one another, when engaging in a different discourse, a field that's been part of our conversation for a much longer time.
I think transparency is the first step to thinking about those guardrails as well.
And the second thing to say there is that it's not an individual struggle.
It should not be framed as an individual battle.
I'm really looking to regulatory frameworks produced both by these companies and by think tanks that operate outside their purview to offer guidance, to come to the table, and to think about how different stakeholders participate in this conversation, acknowledging, at the end of the day, that when we participate in the conversation, we're both criticizing and condoning some of the practice.
I think it's much more complex than the one-sided dimension that it often takes.
>> I appreciate how you're approaching some of these issues, George, because it doesn't strike me as a class that is, quote unquote, pro-A.I. or even anti-A.I. It is looking to teach the reality while living with the reality that even if we want to resist it, it will be here, and students will graduate into a world that is, as you've said, saturated with A.I. So it's sort of neutral on the existence of A.I. while looking at each issue within A.I. individually, in its context.
Is that fair?
>> Yes.
>> Okay.
I'm going to do the PR for you then.
There you go.
He needs no PR. The Geneva Central School District is launching this class in January, and I think we're going to see a lot more districts do it.
I wouldn't be surprised if colleagues in other districts call you and are talking about this.
Sam writes in to say: Evan, an A.I. literacy course should be grappling with some of the issues that your show yesterday talked about, which is who is creating art, and how much do we care whether or not that art is created by a human? It goes on to say humans are going to be out of work. Sam is saying that should be part of an A.I. literacy class. So again, yesterday we talked about Tilly Norwood, but we've talked about music.
We're going to talk about theater soon.
There was a very interesting set of plays as part of the Fringe Festival last month in Rochester. You're smiling over there. Are you aware? Familiar? Okay, so you know this idea: you're going to see three shows, and one was written by A.I. Can you pick it out of the lineup? Very, very interesting results there.
But part of what Sam is saying is that we should be teaching this, that we should be asking: even if we like this song, if it's purely A.I. created, if no humans were involved, should we listen to it? Should we consume it? Is that going to be part of the course?
>> 100%.
So part of the ethical conversation that we have within the course extends beyond writing and talks about the performing arts, talks about the visual arts.
You know, we were talking today about visual media specifically that we see on social media.
That's so indistinguishable from media that's made by humans.
And so I think that has to be part of the conversation.
If we're going to prepare students to think ethically about these questions.
>> So there are no wrong answers here. I want to ask the two students. Again, if you're just joining us, Payce Chu-Lustig and Vivian Hoang are here. They're seniors at Geneva High School, held in very high regard by their teacher. That's why they're here. They're on a great track in their own academic careers. And we're talking about how they view artificial intelligence, how it affects learning, assessment, et cetera.
And when it comes to Sam's email about art, let me just take the idea of a book.
So let me just take the idea of a book.
So take a book that's going to be 80,000 words.
That's 230 to 250 pages of a traditional book.
If I, the author, write a 200 word prompt for ChatGPT and that's all I write, and I say, give me the book.
And ChatGPT spits out the book and it's 80,000 words.
Payce, is that my work? Is that my book?
>> In my opinion, no, but I think it does vary from person to person, about what they think.
>> No, I'm with you.
So a 200-word prompt produces an 80,000-word book.
Is that my book, Vivian?
>> No, I don't think so either.
Like Payce says, I think it really does differ from person to person.
>> Let's say I've written 10,000 words, but I've got writer's block and I just don't know where to go next.
And I put all 10,000 words into ChatGPT and I say, here are the parameters of where I want to go with this.
Here's why I'm struggling.
Finish it for me, and it produces another 70,000 words to create the finished product.
Is that my book, Vivian?
No? Still not. Okay. Payce?
>> No.
>> I've got 60,000 words done.
I just can't get to the finish line.
I need a few more chapters.
And I put the 60,000 words in, and it produces another 15,000 to 20,000 words.
And the book is done.
Is that my book?
Payce?
>> No.
>> Still no. Yeah. Okay. Vivian?
>> I don't think so.
>> Okay.
I've got 75,000 words done, and I need two more chapters.
I'm in the very end here.
I just need a few tweaks.
A.I. helps me get to the end, edits a few things, adds a few thousand words.
I wrote the first 75,000.
Vivian.
Is it mine?
>> No.
>> Still no.
Okay, Payce?
>> No.
>> Oh, interesting.
Those are very interesting answers.
So the reason I ask in that way is you could apply this to songs, right?
So I've written one verse and one chorus of the song and I've had it for years.
I could pick up a guitar and I could do it, but I've never finished it.
I put it into Suno, which is a very popular program.
Suno finishes it for me.
Is that my song, Vivian?
>> No.
>> Okay.
Pace.
>> No.
>> Okay. So, George, they're hard judges here, they are. A lot of people would not be as harsh as them.
A lot of people would draw the line in different places.
The reason I ask is I think this has to be part of literacy and cultural conversations, because it's not going to be so simple as, oh, that's an A.I. song versus that's a human song. Or, I like Jimmy Highsmith's term, that's a synthetic song versus a non-synthetic one. It's going to be hybridized, and we're going to have to draw the lines that we want.
But I think we're going to have to demand transparency in how we assess it.
>> Yes.
Well, and to that point, I think even the term plagiarism is going to evolve over the next ten or twenty years as we think about the depth of the field. To the examples that you gave, I think a lot of them can be distilled to questions of authorship.
And so now maybe the question changes from being a question about plagiarism to a question about authorship and the integrity of authorship versus co-authorship and what that might look like.
>> Yeah.
And co-authorship is weird because to me, in my brain, it anthropomorphizes.
>> Yes.
>> It does, because when you do have a co-writer of a song, a coauthor we tend to think in a human context, and that involves collaboration.
You can collaborate with A.I., or you can let it just do it.
And as a reader, I wouldn't know if you were like, oh, I took this idea and then I remolded it, then I put it back in, then it gave me more ideas and I remolded it again, versus, I hit a wall, I put it in ChatGPT, it finished it for me, done, get it out the door.
I think we're really going to struggle to discern where the human creation of art is and where the line is, and I'm not looking forward to that part of the future.
>> I think we're going to struggle as well.
I think it is going to involve and invite some different stakeholders to the conversation that maybe haven't always been part of the conversation.
So I share your perhaps hesitation, but I also welcome and want to see where it goes from here.
>> Yeah.
Would you listen to music, Payce and Vivian, that's purely A.I. created?
>> I mean, I feel like you can always listen to it, but maybe my opinion would be a bit different.
>> Would you want to seek it out?
Would you be less likely to seek it out if you knew it was just A.I. generated?
>> I think I would be curious, but I think at the end of the day, I probably would prefer to listen to music based on real people who made it and their experiences.
>> But Vivian, what if it's a total bop?
>> I think there's totally curiosity there. I know there's been this huge trend lately on social media of A.I.-generated music, which is interesting. But I think the reason we enjoy music so much is just that connection with the artists themselves, and that deep emotional connection is something you may not be able to get if it was A.I. generated.
>> Yeah.
I think there's a lot of wisdom in that answer.
So as we get ready to wrap here, we've been talking to George Goga and a couple of students from Geneva High School, where in January they will launch their A.I. literacy program. It might be the first in New York State. You're certainly going to see more pop up in a lot of places before too long.
It will be very, very common.
How do you define the goal here, George, for your students when they go through this course?
You're the teacher.
The goal for your students is to emerge from this course with what?
>> An understanding of A.I. through one of three metaphors. We can, as a civilization, think of A.I. as a mirror: it reflects back our anxieties, our concerns, the things that we see in ourselves. We can think of A.I. as a hammer: we can use it in unethical ways, in ways that outsource our thinking. Or we can use it as a telescope: it can enable us to go one step further, to dig into a conversation that we haven't always been having.
And so the ultimate goal of the class, I think, is to establish ways for students to think within those metaphors and develop the competencies for using this well into the future.
>> Are you an optimist with A.I.?
>> I'm a realist with A.I.
>> There you go.
>> So you're not a...
>> Cynical optimist, not a cynic.
>> But a realist.
Yes.
And that means looking at the rough edges and some of the the bad possible outcomes.
>> It absolutely does.
>> And helping students see that too.
But in a way that doesn't... I didn't even want to read Alex's email on the air, because I didn't want two wonderful 17-year-old students to panic at the title of this book that Alex sent in.
But, you know, there's a wide range of possibilities here.
And I'll close with this.
How do you help students see that possible future and not panic about how dark it may be, while being realistic about where we're going?
>> I think at the moment it's a vocabulary.
It's a literacy like any other.
It's about getting students to see their own position within this technology and to ultimately ask some of these salient questions about the ethics, about the philosophy behind it, and even about the art behind it, and to ultimately ask themselves the question, where do they stand within this conversation?
And where should we as a society stand within the conversation, too?
>> You want to come back next fall and tell us how the first year went?
>> We'd love to.
>> All right.
That's an invitation.
Thank you.
Please come back.
And my prediction is, when we come back a year from today, there are going to be a bunch more literacy courses popping up, because that's where we're going.
George Goga, teacher at Geneva High School, doing great work there.
Thanks for coming in.
>> Thanks for having us, Evan.
>> And I want to thank the students.
Payce Chu-Lustig and Vivian Hoang, thanks for sharing your stories.
Good luck to you.
Although I don't think you need luck, I think you're in great shape.
But thank you for joining the program today.
>> Thank you.
>> Thank you.
So sorry to pull you out of school. And from the whole team at Connections:
Thank you for listening.
Thank you for watching.
We're back with you tomorrow on member supported public media.
>> This program is a production of WXXI Public Radio.
The views expressed do not necessarily represent those of this station.
Its staff, management or underwriters.
The broadcast is meant for the private use of our audience.
Any rebroadcast or use in another medium without expressed written consent of WXXI is strictly prohibited.
Connections with Evan Dawson is available as a podcast.
Just click on the link at wxxinews.org.