
How New York Plans to Tackle AI Image Manipulation
Clip: Season 2024 Episode 51 | 14m 17s
Artificial intelligence is transforming media, but it also poses new risks. Assembly Member Alex Bores (D-Murray Hill) discusses his proposed bills to regulate AI-manipulated images, focusing on the need for transparency and trust in digital content.

The rise of artificial intelligence has prompted some elected officials to look at ways to combat its potential dangers.
This year's state budget established Empire AI, which Governor Kathy Hochul says will secure New York's place at the forefront of AI research.
It also included legislation requiring disclosures when artificial intelligence is being used to generate content.
But some lawmakers and advocates argue that there needs to be even more protections.
A few months ago, lawmakers in the Assembly held a public hearing focused on consumer protection and safety when it comes to AI.
Assembly Member Alex Bores plans to introduce a slew of bills next year aimed at requiring labels on images online so users know when content has been manipulated.
For more on those priorities, David Lombardo of the Capitol Pressroom spoke with the assembly member.
Here's that conversation.
- Well, thanks so much for making the time, Assembly Member.
I really appreciate it.
- Thanks for having me.
- So in September, you were part of an Assembly hearing that examined how to protect consumers from artificial intelligence in the future.
And it seemed like the first step is making the public aware of when AI is being used to manipulate or generate content, which was actually the subject of legislation adopted in 2024.
But moving forward, what other steps can the state mandate to help promote awareness?
- This technology is developing really quickly and so the actions that the state is gonna have to take will also develop really quickly.
But we definitely want to do more in encouraging people to know when AI is being used and when it's not being used, so that we know that we can actually trust the images or the information that we're getting.
We want to know more about the training sets that are being used for many of these models.
We wanna ensure that consumers are protected and that people are taking due care when they're putting out new algorithms and we want to protect from some of the biggest potential threats that might come from really advanced frontier models.
- So the legislation that was the focus this year is about sort of requiring a disclosure, but what about requiring some sort of fingerprint on the content that's being generated, to ensure that at least tech-savvy people can see if something was manipulated at all?
- I think that's a great idea and I daresay we should do a few bills on it.
To detect what is false will always be a cat and mouse game because the image generators will get better and better.
A more reliable way of approaching this problem, at least from a technical standpoint, is to rely on proving what is true.
And so there is an open-source standard that many in industry have come together to create, called C2PA, which can label any image or video with how it was created and any of the edits or manipulations that occurred throughout its development.
And so that works for both AI generated content and real content.
If we can get to a place where 90 to 95% of images or video have this metadata tag, then everyone will just know, okay, if it's not there, I shouldn't trust it.
And that's a much more reliable way to build trust in systems.
So I'm actually introducing, whether it's one bill or four bills, four initiatives to encourage the use of C2PA, which would mandate either retaining the information or producing it. If you are an AI image generator, you'd have to produce this information, and all of the big ones already do. If you're a social media company, you'd have to retain it if users upload it.
If you're government, you'd have to retain it and also produce it wherever practicable.
And if you are a statewide campaign, those with truly the resources to do so would have to ensure that every image, whether it's AI generated or not, has this tag starting in the 2026 to 2030 cycle.
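As background on the standard Bores describes: C2PA works by embedding a signed manifest of provenance data, recording the capture device or generator plus any subsequent edits, inside the media file itself. What follows is a minimal Python sketch of how one might check a JPEG for the presence of such a manifest. It relies only on the container-level detail that C2PA manifests in JPEG files travel in APP11 marker segments as JUMBF boxes labeled "c2pa", and it detects presence only; actually validating a manifest's cryptographic signatures requires full C2PA tooling, such as the open-source c2patool.

import struct
import sys

def has_c2pa_manifest(path: str) -> bool:
    """Heuristic check: does this JPEG carry an embedded C2PA manifest?"""
    with open(path, "rb") as f:
        data = f.read()
    if data[:2] != b"\xff\xd8":            # missing SOI marker: not a JPEG
        return False
    i = 2
    while i + 4 <= len(data):
        if data[i] != 0xFF:                # lost sync with the marker stream
            break
        marker = data[i + 1]
        if marker == 0xDA:                 # start-of-scan: header segments are over
            break
        (length,) = struct.unpack(">H", data[i + 2:i + 4])  # length counts itself
        segment = data[i + 4:i + 2 + length]
        if marker == 0xEB and b"c2pa" in segment:           # APP11 JUMBF payload
            return True
        i += 2 + length
    return False

if __name__ == "__main__":
    print(has_c2pa_manifest(sys.argv[1]))

The 90 to 95 percent adoption threshold Bores mentions is what would make a check like this meaningful at scale: once tagged media is the norm, an absent or stripped manifest becomes, by itself, the signal not to trust an image.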
- Well, yeah, it's interesting you mention the mandates that you're looking to impose and the different players that you want to impose this on.
And because basically these tools can be used by anyone to create or manipulate content, but there really are only a few outlets that can be used for the mass distribution of that content.
And I'm thinking like you mentioned, social media sites as well as traditional broadcasting platforms.
So is it fair to mandate this type of disclosure on these platforms or is there a reason to think that they shouldn't necessarily have to be responsible for the content on their sites?
- This isn't an outside mandate.
This is encouraging a standard that industry itself has come up with.
And actually at the oversight hearing that you referenced at the start of this, Tech:NYC, which represents most of the social media companies, asked that social media companies be required to preserve this information.
If we can design it in a way where it's clear and easy to comply with, we can actually build more trust in everything we see online, which only benefits these systems in the long run.
So you can think of it not as, you know, governments saying, oh you have to do this newfangled thing, but recognizing that industry is coming up with some of their own solutions and we are cheerleading that and encouraging compliance so that we can all have a more trustworthy experience.
- Well it's interesting to hear you talk about the high tech industry trying to come up with its own ideas in this area because tech companies traditionally balk at any sort of government mandates and we're seeing that play out right now with some efforts to regulate the use of social media tools by young people.
So how much deference are you prepared to give some of these platforms as opposed to telling them no, this is how you're going to go about doing this?
- The question isn't on deference, it's on what's the best way to get it done.
And if the comments that we're getting back from industry are, "we don't want to do anything here," well, we're not gonna give that any deference.
If it's, "it would be very difficult to comply with this, it'd be very difficult technically to build this into the system, but if we went about it this other technical way, we could achieve all the goals we want," that's really useful feedback.
So yeah, as you know, I come from the tech industry, I have written code myself.
I wanna make sure that the laws we write can actually be implemented.
So on that level, I'm gonna be listening quite closely, but I think we all share the goal of knowing, when we see an image, whether it's true or false.
- Well, we've been talking about regulating the end product of artificial intelligence, but what about regulating how it's actually used?
Is that something the state government can wrap its arms around too?
- Absolutely.
And I think there's gonna be two broad approaches to that.
One will be specific instances of using AI.
So you've already seen many bills proposed, some passed, by my colleagues: on when AI is used for an employment decision, for hiring or firing; or regulating that if you're signing away your rights to be recreated as an actor or a performer, then you need to be represented by a union or a lawyer in that negotiation.
You've seen bills about broad disclosure if it's used as a chatbot, and these are all my colleagues' initiatives that I think are great steps forward. You're also gonna see a set of bills around broadly regulating AI itself, because there are some specifics to how that technology works that are better addressed at that level.
So you've seen just this past year Colorado pass the first consumer protection bill for AI, and that, with some tweaks, is being used as a model throughout the country for new bills, because ultimately we'll be more successful if we're all kind of aligning the steps we're taking.
And so I'm working on a version of the Colorado bill with legislators in Connecticut and in Texas and in Virginia and all over the country to try to bring some broad protection to consumers as well.
- Is there a benefit to trying to standardize some of these approaches so that the states aren't necessarily doing things that are at odds with each other?
- Absolutely.
And that's when I think we've had the most success: when we are all talking.
And I think the good news is that legislators from throughout the country have been meeting every two weeks to discuss the latest developments in AI and develop bills together.
So you know, I think we are, I'll speak for myself, quite worried about what's coming from the federal government, whether it's inaction on important issues or things that are actually harmful.
But as the states we've been preparing for this and uniting and working together on unified bills.
- New York is trying to develop a new artificial intelligence technology and use it as part of its Empire AI Initiative, a public-private partnership based out of Buffalo.
What are your expectations for this venture?
Because it's been billed as sort of the end-all, be-all of AI development, and New York's gonna be at the forefront of this technology.
- Nothing is the end-all, be-all of AI development, but it is a really great initiative, and you're seeing other states try to copy it now.
California tried to do it this year in legislation and failed. Creating a cluster of the advanced chips that are needed for AI research, and giving that at a discount to universities and researchers in New York State, will not only encourage the kind of research we really want of AI, safety research and anti-bias research, but will also end up spinning off companies and benefiting the economy.
So I think it's a wonderful way to be forward-thinking in terms of the short-term applications of AI. I think there's a lot more we need to do when it comes to the long term.
- How do you anticipate this research endeavor will inform some of the regulations that might be born out of it in the future, 'cause as you mentioned, this is an evolving area.
So if we're the ones developing the technology, will we also be in a better place to develop future regulations?
- My team hates when I say this, but I love nothing more than a study or a paper that changes my mind.
We would be silly not to be looking at the research that comes out of Empire AI and I think that very much will inform what we do going forward.
- So you mentioned earlier that you have this background with computers.
I believe you have a degree in computer science and you know, you're also one of the younger members of the state legislature.
So when you want to take that background and your own interests into these issues, do you feel like you've got colleagues who are receptive to the views and the ideas you want to share and discuss?
Or is it sometimes an uphill battle, like trying to talk to your parents about an app on their smartphones?
- First of all, my parents have come a long way in dealing with their cell phones, so-
- Maybe your parents are younger than mine.
- Yeah, no, my colleagues are very receptive.
I think throughout the country you wanna be talking to experts in whatever you end up legislating.
I happen to know a lot about the technical side of AI, but these decisions should be made as a society as a whole.
And I love learning from my colleagues about how they're seeing AI be implemented in their communities, how they're seeing sometimes the gap in access to these new technologies.
I mean this is a conversation for everyone to partake in and I really welcome what they are bringing, the same way they welcome what I am.
- And what about the actual act of regulating these issues?
We've seen the state attorney general's office take the lead on say social media regulations and they've also been discussing artificial intelligence, but we've also seen the Department of Financial Services in this space as well.
Does there need to be a lead agency focused on say, artificial intelligence or does it make sense to defer to different agencies depending on how the technology's being used?
- I don't think that question is settled yet.
There are certainly times where you might wanna spin up a new agency.
Ideally, I think you'd probably want that at the federal level.
But again, this technology's moving so quickly that locking in a brand new agency at this point, I just don't think that's a settled question.
- And when you think about legislation on this front moving forward, is this an area where it is so big and it is so far reaching that for some of the issues that we've talked about, they're gonna have to ultimately get done as say, language in the budget or some sort of omnibus bill?
Or do you think this is a topic best approached in a piecemeal form?
- Both.
If we're spinning up a new agency, if that's the response, obviously that's a budget conversation.
If it's protecting individual uses, that's an individual bill.
But I think we can still do big things through the normal legislative process.
I mean, one of the things that I am gonna be focused on this session is thinking about those frontier models, the ones that are really pushing the limits of what we know and what we can predict, and making sure that everyone that develops them has, at the bare minimum, a safety plan in place that they're actually sharing with a third party and following along with.
But maybe a way to do that is not just to say have a safety plan, but actually say if things go wrong in the real world, you're gonna be responsible for what happens there.
Now, that is not spinning up a new agency, that's not necessarily doing all the rulemaking; it might be a quicker way of aligning incentives and bringing success. But that can happen through the normal legislative process or in the budget.
And I welcome anyone that wants to work on it in any of those times.
- Well, we've been speaking with Assembly Member Alex Bores, he is a Democrat from Manhattan.
Thank you so much for making the time, I really appreciate it.
- Thanks for having me.
- And for more on AI priorities heading into the next year, you can visit our website that's at nynow.org.
New York NOW is a local public television program presented by WMHT
Support for New York NOW is provided by WNET/Thirteen.