By Geoff Bennett and Courtney Norris

Anthropic announced that it has started a very limited test of its newest AI model, called Mythos, a model deemed so powerful that the company warned it could cause widespread disruption if it were released to the public. Anthropic is giving some companies access to Mythos to test it and identify vulnerabilities, a move that is raising concerns. Geoff Bennett discussed more with Gerrit De Vynck.

Notice: Transcripts are machine and human generated and lightly edited for accuracy. They may contain errors.

Geoff Bennett: Anthropic announced this week it has begun limited testing of its newest A.I. model, called Mythos, one the company says is so powerful it could cause widespread disruption if released to the public.

Logan Graham, Anthropic: It's just generally better at pursuing really long-range tasks that are kind of like the tasks that a human security researcher would do throughout the course of an entire day. Obviously, capabilities in a model like this could do harm if in the wrong hands. And so we won't be releasing this model widely.

Geoff Bennett: For now, Anthropic is giving more than 40 tech companies, including some rivals, access to Mythos to test it and identify vulnerabilities across systems. But even that move is raising concerns.

For a closer look at all of this and the implications, we're joined now by Gerrit De Vynck, who covers A.I. for The Washington Post.

Thanks for being with us.

Gerrit De Vynck, Tech Reporter, The Washington Post: Of course.

Geoff Bennett: So help us understand the concern here. What specifically makes this model different from other A.I. models? And why is there so much, frankly, fear around it?

Gerrit De Vynck: The specific concern being called out here is that this model is really good at finding gaps in software that hackers could exploit.

So, right now, all software has bugs, but software is pretty complicated, and you need to really know what you're doing in order to sift through all that code to find something that you could then use to hack into a system.

And what Anthropic is saying, and what some of the independent cybersecurity experts they have also given access to this model are saying, is that it can essentially do that automatically. It can sift through all sorts of code. Something that might take humans who are very good at this months to do, it can do in minutes or hours.

And so the concern here is that, if this is out in the public and anyone can use it, then anyone who wants to hack into any kind of software, for whatever reason, would be able to do it using this technology. And that's why the company says it's keeping the model under wraps, at least for now.

Geoff Bennett: Keeping it under wraps, but also giving, as we mentioned, some 40 other companies, including Microsoft and Nvidia, access, in part to strengthen their own cyber defenses.

What do we know about that decision? Does sharing it more widely actually reduce the risk or potentially increase it?
Gerrit De Vynck: Yes, I mean, there is a bit of a precedent here in cybersecurity.

Often, if one company finds a flaw in another company's software, instead of just giving it to the public and creating a situation where that other company could be hacked, they will go behind the scenes and say, hey, guys, we found this. You might want to fix this before the rest of the world figures it out.

And so I think it's sort of in that tradition that they're doing this. But, of course, some people are saying, hey, now we have all these powerful tech companies that have access to this allegedly extremely powerful tool for cybersecurity. Well, is it also powerful for other things, things they could use to grow their business and get an edge on other companies?

So there are some complaints that, if this thing is really so good, why don't you let the rest of the world actually see it for themselves, and then we can decide what to do with it?

Geoff Bennett: Logan Graham, who's one of Anthropic's researchers, suggested that, if this A.I. program were fully released, it could force widespread software updates, eventually exposing weaknesses everywhere.

Is that a realistic scenario, or is he in some ways overstating it?

Gerrit De Vynck: Yes, potentially.

I mean, it's difficult, because, besides these companies, no one has really been able to get their hands on it.

And I think we always need to take these big A.I. companies with a grain of salt. It's not the first time an A.I. company has said, oh, my goodness, our new technology is so powerful, we should be afraid of it.

You know, it's great marketing, right? Because if something is so powerful that it could change the world or cause chaos, it's also very powerful for doing other things. And so I think we need to be careful.

I'm not necessarily saying that Anthropic is lying or misleading the public here. I'm sure these concerns are very legitimate. But I do think that we're already in a situation where cybersecurity is pretty atrocious. I mean, everyone's personal data has been hacked at some point.

If anyone really wants to get into a software system, and if they have the resources and the incentive, they will probably be able to do it. We already live in a world where software is broken and needs to be updated constantly, right?

Every time you open your operating system, it's probably pinging you to update the apps that you have on your computer, right? That's because of the cybersecurity situation we have right now.

And in the same way that this Mythos technology could be used to hack into computers, it could also be used to defend against hacks. And so a lot of the cybersecurity experts are saying, look, yes, this is concerning, but we can also use this technology. The good guys can also use it to protect us.

And so it doesn't necessarily completely change the balance of power that we have right now.

Geoff Bennett: Well, say more about that, because there is this strange disconnect, where even the A.I. companies themselves are now warning about the potential dangers, at the same time as they are racing to release more powerful systems. What accounts for that?

Gerrit De Vynck: Yes, I mean, I think it's very easy to point at that and say, look, what's really going on here?

And I think each A.I. company is slightly different. They have different incentives. But it's true. I mean, they are all in this extremely competitive race to build the best A.I. system.
It's very expensive to train these things. It costs hundreds of millions of dollars to develop each new version of this A.I. technology. And very few companies are able to do it. And the entire tech industry is in agreement that this is the most important technology to come out probably since the Internet itself. And so there's a huge amount of money incentivizing the development of this technology.

At the same time, a lot of the people who work at these companies do legitimately believe that there are real concerns, that it could be used for cyberattacks, that it could be used for misinformation. Some people even believe that it could become so smart in the coming years that humans would struggle to keep it under control.

And so I do think that those are real beliefs held by some people at these companies. And yet they are locked in this competitive dynamic.

Geoff Bennett: Gerrit De Vynck covers A.I. for The Washington Post.

Gerrit, thanks again for being with us.