
Story in the Public Square 8/31/2025
Season 18 Episode 9 | 25m 15s | Video has Closed Captions
Examining the impacts of AI on children.
Artificial Intelligence is changing all of our lives, and the biggest changes are yet to come. Yet despite the revolution on our doorstep, few have looked carefully at the impact of AI on children. Dr. Mhairi Aitken has done just that and has evidence-based advice for policymakers and developers.
Story in the Public Square is a local public television program presented by Rhode Island PBS

- Artificial intelligence is changing all of our lives, and the biggest changes are yet to come.
Yet, despite the revolution on our doorstep, few have looked carefully at the impact of AI on children.
Today's guest has done just that and has evidence-based advice for policymakers and developers.
She's Mhairi Aitken, this week on "Story in the Public Square."
(bright uplifting music)
Hello, and welcome to "Story in the Public Square," where storytelling meets public affairs.
I'm Jim Ludes from the Pell Center at Salve Regina University.
- And I'm G. Wayne Miller, also at Salve's Pell Center.
- And our guest this week is Mhairi Aitken, a senior ethics fellow in the Public Policy Program at The Alan Turing Institute in London.
She's joining us today from the United Kingdom.
Mhairi, thank you so much for being with us.
- Hi.
It's great to be here.
- So you've been involved in a study at The Alan Turing Institute, one you helped lead and contribute to: "Understanding the Impacts of Generative AI Use on Children."
This research was focused on eight to 12-year-olds in the United Kingdom.
How much and what kind of experience do they actually have with AI?
- Yeah, I mean, so at The Alan Turing Institute, I have the great privilege of leading a team, a program of work around the topic of children and AI.
And we've been working in this space for about four years now.
And over those four years, we've really found that children's experiences with AI have changed and developed quite a bit.
So children of all ages are already kind of very regularly interacting with AI on a daily basis.
That includes, you know, infants and preschool children who might be interacting with smart devices or smart toys, like smart teddy bears and smart dolls that personalize and interact as the child plays with them, through to older children interacting on video streaming platforms where AI will personalize and filter content, up to teenagers and young people engaging on social media, and many other ways where AI has an increasingly big role in the information children access about the world, as well as in mediating interactions online.
But with generative AI, we're seeing lots of new developments in the ways that children and young people are interacting very directly with AI technologies and using them in lots of different ways.
The study that you mentioned was a research project supported by the LEGO Group, and in it, as I say, we were focused on children between the ages of eight and 12.
One part of the study was a survey.
We surveyed around 800 children across the UK.
And in that survey, we found that around a quarter of children in that age group between eight and 12 reported using generative AI technologies, and the majority of them reported using generative AI at least on a monthly basis.
So they were using these tools regularly.
Within that age group, we did find differences in the purposes for which they were using those tools, what they were doing with those tools.
Younger kids, so the eight and nine-year-olds, were more likely to say they were using those tools for fun, for games, for playing around, such as creating images, whereas the 11 and 12-year-olds were more likely to be using them in schoolwork, or to find out information, or to support them in their learning.
And so, I think if we continued to expand that age range, we might find a similar trend as we begin to look at older children and teenagers, who may be using those tools more for learning and educational purposes.
- So are you seeing this in other countries outside the UK?
- Well, so I mean, our work has really been focused in the UK context.
So I think there's a growing body of evidence looking at experiences of generative AI in different contexts, and I think we're seeing similar trends.
I don't want to speculate too much, as my research has been focused in the UK, but I think it's likely that many of these kinds of trends are going to be relevant, or are going to be found, in other contexts as well.
I'm really interested, certainly, for kind of future research to think about what the international comparisons might be and to what extent there are similar experiences in different regional contexts, also cultural contexts, and, you know, to really explore what those differences might be.
- Can you give us a little more detail about what children in the UK are doing with generative AI?
Give us some examples.
You mentioned a few before.
Get into that in a little more detail, because I'm sure our audience would love to know, 'cause it's happening here, too, of course.
- Yeah, I mean, so it's interesting.
We found some really interesting distinctions or differences among the children that we engaged with.
One thing in particular: children with additional learning needs were significantly more likely to report using generative AI for communication and connection, and also for seeking advice on more personal issues, compared to children without additional support or learning needs.
I think that that's really something we really need to explore further and explore what the impacts of that might be.
We also find that, you know, when we speak to teachers and parents, there's often a lot of concern around the ways that children and young people might be over-relying on these tools in education, for example, creating and handing in work that is actually AI-generated while claiming it's their own.
But when we speak to children and young people about the ways they're using these tools, often what they tell us is that, yes, they will use them to support their learning or their education, but most often as a way of finding information, and then integrating that into their work and into their learning.
Of course, there are instances where children and young people may well be using them to create outputs that they might hand in as homework, but generally what we find is that children really want an opportunity to learn more about these tools and to understand, you know, what are the limits on how they should be using them, how can they use them well to support their learning, but also, what are the things they shouldn't do?
And actually, I think that's where, at the moment, there's a bit of a gap.
We need more resources, we need more education focused on equipping young people with the skills and understanding to make good choices about how to use these tools well, without using them in ways that are inappropriate or becoming over-reliant on them.
- Yeah, so Mhairi, one of the things that struck me about the study was that you actually interviewed children.
There were actually children, students, participating in the study.
Why haven't those perspectives, those voices, been more prevalent in the development of these platforms from the beginning?
- Yeah, and that's a really, really important component of the work that we do at The Alan Turing Institute.
For the last four years, we've been collaborating with an amazing charity in Scotland called Children's Parliament.
And Children's Parliament are real experts in kind of children's participation, particularly focusing on children's rights across a wide range of different policy areas.
And so, in this study, we were working in primary schools in Scotland.
And we ran six full days of workshops across two schools in Scotland, engaging children between the ages of nine and 11, to explore their experiences with generative AI.
And in these workshops, they had opportunities to engage with generative AI tools, not directly, because, actually, one thing that's really important to note is that, currently, you know, there's no generative AI tool that can be guaranteed safe for children to use.
So in this study, it was really important that children didn't have direct access to these tools, but they had opportunities to interact with them through giving prompts to members of the research team.
The research team then screened the outputs, and then they got to see what the outputs of those tools were.
And through this process, the children got to learn how generative AI works, how it might be used, but also about its limitations and its impacts.
And they got to, you know, really explore how they felt about the ways that generative AI could and should be used, and also what they wanted developers and policy makers to know about what matters to children in thinking about generative AI.
And I think that's really important because, as you say, you know, children's voices are all too often overlooked.
You know, children are probably the group who will be most impacted by advances in AI technologies, but they're also the group that are least represented in decision making about how those technologies are developed and also in decision making around policy and regulation.
But we know that children use these tools in very different ways from how adults use them.
They'll often use digital technologies in ways that are very different from how the designers or developers of those tools anticipated that those tools might be used.
So if we don't directly engage with children and young people to understand their actual experiences, how they actually use these tools and how they actually interact with them, then there's a risk that we miss opportunities to think about how we could use those technologies really well and maximize the benefits, but also that we miss opportunities to fully understand what the actual impacts and risks of those technologies are.
So I think it's really, really important, to inform future innovation practices but also future policy, that we engage directly with children and young people, understand their real experiences, their real needs, their real concerns, and give them a voice in shaping those processes.
- So your research was not confined just to children.
You interviewed parents, caregivers, teachers.
What were their assessments of AI, and how did they compare to children's?
I think this is a critical piece of your research.
- Yeah, absolutely.
So the survey component surveyed, as I say, around 800 children between the ages of eight and 12, but also their parents and carers, and 1,000 teachers across the UK.
And these are teachers who teach children and young people between the ages of one and 16.
And there are some notable differences.
So, for example, in the survey of teachers, we found that three out of five teachers reported that they use generative AI to support their teaching.
And often, that's in tasks like lesson planning or in kind of the administrative tasks to support teaching.
But we also found that teachers reported being very positive about the opportunities of using generative AI themselves in education, and about the ways it could support their teaching, but at the same time were really concerned about the ways that children might use those tools.
And teachers had big concerns around impacts on critical thinking, impacts on diversity of ideas, that, through increasing use of generative AI, children were displaying less diverse ideas, and also concerns around plagiarism, that children may submit work that was AI-generated and pass it off as their own.
I think it's really interesting to see that slight tension in that teachers are really positive about their own use of generative AI and see it as having value within educational context, but really concerned about children's use of those tools.
And I think that's where, again, we need more communication there to understand, you know, what are the opportunities, and how might teachers and educators support children to understand the limitations, to make sure that they're using those tools responsibly and safely.
- Did you- (indistinct) - I'm sorry.
- Go ahead.
Go ahead, Mhairi.
- On the survey of parents, we found quite different concerns.
So, with parents, the concerns tended to be more that children using generative AI might access inappropriate or inaccurate content.
We found less concern from parents about the ways that children might potentially use AI in schoolwork or homework, and much more concern around them accessing inappropriate and inaccurate content.
And these are well-founded concerns: in the workshops that we ran, where, as I say, children never directly accessed generative AI tools, we found very often that children would give us very innocent prompts, you know, asking to create an image that was very, very innocent.
And actually, the generative AI model would produce an image that was highly inappropriate, often producing kind of sexualized or somewhat violent images that we couldn't show to children and young people.
But this shows very clearly the risk: we know that children as young as eight are regularly accessing these tools, and that means they might be exposed to inappropriate or potentially harmful content through models that are really not safe for children to use.
- Did you find any children who were not interested in AI?
- Maybe I might just tweak that question a little bit.
Is there any sort of difference across socioeconomic status?
- Yeah, yeah, a really important question.
We found some really significant differences.
In the survey, as I say, we found about a quarter of children reported using generative AI, but children who attended private schools were significantly more likely to report using generative AI, and also significantly more likely to report having previously heard about generative AI, so having some kind of understanding of what it is, compared to children at state-funded schools.
And that raises real concerns about inequity in access to these technologies, but also inequity in opportunities to learn about them.
And as these tools are increasingly being used for educational purposes, and as there's lots of interest in the ways that generative AI might be used to support learning, particularly for students with additional learning needs, if these tools are potentially exacerbating those inequities in access to learning tools and access to education between private schools and state-funded schools within the UK context, I think that's an area we really need to pay particular attention to, along with the impacts it might have on equity in education.
But to the first point in there, around children's interest in AI, I think that's something that did surprise us in the workshops that we ran.
So, as I said, we had six full days of workshops in primary schools in Scotland.
And when we went into these workshops, we were aware we were bringing in generative AI technologies.
We had, you know, full days of creative tasks where the children would have opportunities to choose what tools they wanted to use.
And they could choose if they wanted to use generative AI or if they wanted to use kind of more traditional art materials, like paint, or collaging, or plasticine, or all kinds of different art materials.
And when we were planning these workshops, we fully expected that the kids were gonna be so excited about generative AI technologies that we would have to have a queuing system, you know, we would have to have a limited number of laptops.
And we thought that we were gonna have to really manage the numbers for kids coming to use generative AI.
We were completely wrong.
We went into these schools, and when the kids saw the art materials, the traditional, you know, hands-on art materials, they were so excited, and they much preferred using paint, plasticine, you know, things they could get their hands in, get dirty physically, you know, make art.
They were much more excited about that than sitting down with a researcher at a computer and giving a prompt for a generative AI model.
- What do you think that means?
That seems profound.
- It does.
- I mean, I think it means a few things.
I think there's something quite beautiful about the fact that, you know, children want to get their hands dirty.
They want to physically make something.
It also says something, I think, about the level of resourcing in arts and creative activities within schools.
We brought in quite a lot of art materials, and certainly some of the teachers commented that the amount of art materials we brought in was, essentially, what they would normally have as a year's supply in their school.
I think that that is also quite telling.
But I think it's also a really important reminder that, when we talk about digital technologies and children, or how children use digital technologies, often the focus is purely on the technology, you know, focusing on, like, are children going to become dependent on digital technologies?
Are they over-reliant on digital technologies?
Are they using these technologies too much?
And actually, it's really important to remember that the choices that children make about how they use these technologies, for what purposes, those choices are always in the context of the environment in which they have access to those technologies.
They're also choices that depend on what alternatives are available to children.
And in this context, you know, having access to traditional art materials, having access to things that they could physically make and do, was much more appealing to the children.
And I think that that's really important.
And it's really important when we think about how we facilitate children's interactions with technologies and ensuring that they have a range of options available to them.
- The children who gravitated toward the arts were also chatty.
That became a social experience, as opposed to in front of a computer, which is really not a social experience.
Talk about that a little bit more.
'Cause that seems profound; it seems to speak to human nature, really.
- Yeah, that was really interesting.
One thing we were really interested in exploring was how access to digital technologies affected creative processes, how it affected relationships and dynamics within the class and among children.
And, yeah, we did find that, when children did use generative AI, they would tend to come up individually to a researcher to ask to put a prompt into a generative AI model.
It was quite a solitary process.
Whereas, when children were using art materials, you know, they would be in a messy area.
They'd be sitting with their friends or standing with their friends, they'd be chatting while they're painting, while they're creating things.
It was a different process.
I mean, it wasn't 100% like that.
There were definitely many moments where kids would come up in a group, and, you know, many times where kids really enjoyed being silly with generative AI, like seeing how ridiculous an image they could get it to make, seeing how far they could take it.
And that created a lot of hilarity.
But those tended to be like particular moments, and then they would go back to their art stations and, you know, laugh about it and create something.
And I think there is something important there.
In the workshops, you know, it was quite a controlled environment.
We had a lot of safeguarding processes in place to make sure that children weren't being exposed to anything inappropriate or harmful, and that meant it was a mediated, facilitated process.
Of course, we know that when children are actually interacting with generative AI at home, it's much more likely to be a solitary process.
Much more likely, it's going to be a child on their own, on a device at home.
And that raises different concerns.
And that's something that we really need to think about, particularly because we know that these tools are not designed with children in mind.
You know, these are not tools that have been designed for children, but we know that children are accessing them and using them.
And that really does bring some big risks around impacts on children's wellbeing, as well as the risks of children being exposed to harmful or inappropriate content.
- Yeah, Mhairi, we've got about another hour's worth of questions, and we've only got about four minutes left in the show.
I want to note that you used the Responsible Innovation in Technology for Children framework, which was developed by UNICEF with the folks at the LEGO Group and the LEGO Foundation.
But in addition to the dimensions of that framework, students also identified a range of concerns that they had about AI, everything from environmental impacts to trust in terms of whether or not they could discern what was real and what was AI, and whether or not it would help them actually learn.
And so, as I read this, frankly, I was amazed that these were eight to 12-year-olds producing this kind of feedback.
And I'm wondering, who's listening?
Policy makers, developers?
I know that's part of the work that you're doing.
Who's listening to this set of recommendations?
- Well, we're hoping to make lots of people listen, and that's really an aim of the research.
Obviously, we were delighted that this research was supported by the LEGO Group, and we're working with the LEGO Group to make sure that this reaches industry voices, as well as policy voices, to influence industry and policy practice.
And that's so important.
Just a couple of weeks ago, I was at the UN Internet Governance Forum, where we presented this research on a panel.
And we're doing lots of kind of policy outreach work to make sure that this can inform policy and inform regulatory processes.
But also, you know, we produce recommendations that are aimed at industry as well, to make sure that this can inform industry practice.
Because, yeah, it's urgently needed.
You know, these technologies are being increasingly used by children and young people of all ages, but their interests, all too often, are not part of the design process or part of the development process.
And that needs to change.
That needs to change.
And the aim of this research is really to inform that and to make sure that children's voices and children's interests are part of the decision-making process.
- So your bio says that you are, quote, "Passionate about finding creative ways of engaging members of the public in discussion around the roles of data and AI in society," close quote, including as a regular performer at the Cabaret of Dangerous Ideas at the Edinburgh Festival Fringe (that sounds great) and in comedy clubs.
Question: Is it hard to make people laugh about AI?
I'm laughing asking you the question.
(Jim laughs) Because I can see you on a stage, and I know you would make me laugh.
But is it hard to make people laugh about AI?
- No.
Surprisingly, no.
I mean, I think- (hosts laughing) People are surprised when you get up on a stage in a comedy club, and they're surprised that AI is what you want to talk about.
(participants laugh) AI is something that's impacting every aspect of our lives.
You know, we've been talking a lot about children, but, you know, adults as well.
Everything from how we date and how we make romantic connections, you know, through to decisions about our healthcare, or our housing, or criminal justice, everything.
That doesn't sound particularly funny, does it?
(hosts laugh) Yeah, you can make a lot of jokes about it because it is something that is so entwined, so integrated into all parts of our lives now.
- Are we gonna see you on "Saturday Night Live"?
(participants chuckle) - Let's see.
(chuckles) - Yeah, we'll see.
- Hey, so Mhairi, we've literally got about 45 seconds left here.
Are there some top-line recommendations for policy makers and developers that you want to leave our audience with?
- Yeah, well, I guess the top-line message is that we need to avoid making assumptions about how AI impacts children, and to stop starting from an adult set of assumptions.
No adult today has experienced growing up in a world with generative AI.
No adult today has experienced being a child with access to generative AI technologies.
So, to get this right, to know what it means to develop and use these technologies responsibly, we have to listen to the real experts.
And the experts in children's experiences with AI are children themselves.
And that's why we absolutely need to have children's voices and children themselves as part of these decision-making processes.
- It's hugely important work.
Mhairi, thank you so much for spending some time with us.
She's with The Alan Turing Institute in London.
That is all the time we have this week, but if you want to know more about "Story in the Public Square," you can find us at salve.edu/pell-center, where you can always catch up on previous episodes.
For G. Wayne Miller, I'm Jim Ludes, asking you to join us again next time for more "Story in the Public Square."
(bright music)