Deepfakes are infiltrating the 2024 election cycle. Just how will this impact voters? Misinformation experts Sam Gregory and Claire Wardle discuss what’s at stake, both politically and technologically. This conversation is part of the WNET series “Take on Fake” which analyzes fake or altered video, images and audio to debunk the viral spread of misinformation and get to the truth.
>>> WE'VE ALREADY SEEN DIGITALLY MANIPULATED DEEPFAKES INFILTRATING THE 2024 ELECTION CYCLE.
JUST HOW WILL IT IMPACT VOTERS?
SAM GREGORY AND CLAIRE WARDLE, WHO IS CO-DIRECTOR OF THE INFORMATION FUTURES LAB AT BROWN UNIVERSITY, ARE DISINFORMATION EXPERTS, AND THEY'RE JOINING HARI SREENIVASAN NOW TO DISCUSS WHAT'S AT STAKE POLITICALLY AND TECHNOLOGICALLY.
AND A NOTE.
THIS CONVERSATION IS PART OF THE WNET STREAMING SERIES "TAKE ON FAKE," WHICH ANALYZES FAKE OR ALTERED VIDEO, IMAGES AND AUDIO TO DEBUNK THE VIRAL SPREAD OF MISINFORMATION AND TO TRY TO GET TO THE TRUTH.
AS YOU'LL SEE, WE HAVE USED GRAPHICS TO IDENTIFY THE FAKES AND TO HELP PREVENT THEM FROM BEING USED FURTHER IN A MISLEADING WAY.
AT A TIME WHEN THERE ARE SO MANY CRITICAL CHOICES, OUR GOAL IS TO HELP YOU SEPARATE FACT FROM FICTION.
>> CHRISTIANE, THANKS.
SAM GREGORY, CLAIRE WARDLE, THANK YOU BOTH FOR JOINING US.
YOU ARE BOTH EXPERTS IN STUDYING MISINFORMATION AND DISINFORMATION.
WE WANT TO HELP OUR AUDIENCE UNPACK NOT JUST SOME EXAMPLES, BUT MAYBE WHAT THEY CAN LEARN FROM HOW TO PROCESS INFORMATION SO THAT THEY DON'T GET TAKEN FOR A RIDE, ESPECIALLY DURING THIS ELECTION YEAR.
CLAIRE, I WANT TO START WITH YOU.
IN THE LONGER ARC OF MISINFORMATION, WHERE ARE WE NOW WHEN IT COMES TO DISINFORMATION OR MISINFORMATION ONLINE?
BECAUSE AROUND ELECTION YEARS, LIES ARE PRETTY COMMON.
AND MAKING THE OTHER TEAM OUT TO BE HORRIBLE IS JUST PAR FOR THE COURSE.
BUT HOW IS THIS CHANGING IN THIS DIGITAL LANDSCAPE?
>> THE WORLD THAT WE LIVE IN NOW MEANS THAT IT'S MUCH EASIER THAN EVER TO CREATE THIS KIND OF CONTENT, AND MUCH, MUCH EASIER TO SPREAD IT.
BUT YOU'RE RIGHT.
LIES, AS HUMANS, THAT'S SOMETHING THAT WE'RE USED TO.
BUT WHAT WE'RE NOT USED TO IS THE AMOUNT OF CONTENT AND HOW FAST IT CAN SPREAD.
>> SAM, I WANT TO TELL OUR AUDIENCE A LITTLE BIT ABOUT YOUR WORK AT WITNESS.
IT'S A HUMAN RIGHTS ORGANIZATION THAT TRIES TO USE VIDEO TO DEFEND PEOPLE'S HUMAN RIGHTS.
BUT YOU'RE ALSO TRYING TO USE TECHNOLOGY TO MAKE SURE THAT VIDEOS ARE NOT UNDERMINING THOSE BASIC RIGHTS AS WELL.
AND YOU'VE GOT YOURSELF A DEEPFAKE RAPID RESPONSE FORCE, FOR EXAMPLE.
HOW DO YOU -- WHAT ARE THE TOOLS THAT ARE AVAILABLE TO YOU TO TRY TO VET THESE VIDEOS THAT CAN BE GENERATED SO QUICKLY, AS CLAIRE SAID?
>> IT'S A REALLY COMPLICATED ANSWER.
IT'S SUPER EASY TO CREATE CERTAIN FORMS OF PHOTOREALISTIC IMAGERY AND AUDIO, LIKE THE BIDEN ROBOCALL.
BUT IT'S NOT SUPER EASY TO TECHNICALLY DETECT THEM YET.
AND THAT CAPACITY ISN'T WITH MOST ORDINARY PEOPLE AND MOST JOURNALISTS.
WHAT WE REALLY WORK ON IS HOW TO BRIDGE THE GAP THAT EXISTS NOW IN TERMS OF THE TECHNOLOGIES AND TOOLS AVAILABLE TO OUR FRONT LINE OF DEFENSE AGAINST MISINFORMATION AND DISINFORMATION, BECAUSE THE TECHNOLOGIES AREN'T THERE YET.
I WOULD ECHO WHAT CLAIRE SAYS.
WHEN WE START TO THINK ABOUT HOW WE DETECT AI, WE NEED TO START WITH RECOGNIZING THAT THIS BUILDS ON PREVIOUS PROBLEMS, AND WE NEED TO BUILD ON PREVIOUS SOLUTIONS AS WELL.
>> I WANT TO SHARE A COUPLE OF EXAMPLES OF IMAGES THAT WERE CREATED USING AI TOOLS.
THESE WERE GENERATED BY THIRD PARTIES NOT ASSOCIATED WITH THE TRUMP CAMPAIGN AT ALL.
BUT JUST IN THE PAST COUPLE OF WEEKS, THERE WERE THESE PHOTOS OF FORMER PRESIDENT TRUMP AT WHAT LOOKS LIKE A CHRISTMAS PARTY WITH A GROUP OF AFRICAN AMERICAN VOTERS.
THEY LOOK LIKE THEY'RE HAVING A GOOD TIME.
AND IF YOU WEREN'T CAREFUL ENOUGH TO LOOK AT THE HAT AND SEE THE MISSPELLINGS AND TRY TO LOOK FOR SOMETHING, YOU WOULD HAVE THOUGHT WOW, HE IS AT A CHRISTMAS PARTY WITH A BUNCH OF AFRICAN AMERICANS.
AND THEN THERE WAS ANOTHER PHOTO OF HIM WITH A GROUP OF YOUNG MEN.
NOW, I WANT TO SAY, A CAVEAT TO OUR AUDIENCE THAT WE'RE TRYING TO BE CAREFUL.
WE'LL DEFINITELY BE FRAMING THESE ONLINE AND ON AIR WITH VERY VISUAL, EASY TO SPOT CAPTIONS.
WE DON'T WANT TO AMPLIFY DISINFORMATION.
AND CLAIRE, I WANT TO ASK, LOOK, WHAT IS THE HARM IN THESE?
ON THE ONE HAND, WE KNOW THAT THESE WERE NOT REAL HUMAN BEINGS.
ON THE OTHER HAND, LOOK, PRESIDENT TRUMP DOES HAVE SUPPORT FROM REAL LIFE AFRICAN AMERICANS, RIGHT.
SO WHERE IS THE DANGER HERE?
>> THE FIRST THING I'LL SAY IS THIS WAS GENERATED BY AI.
BUT THE SAME KIND OF CONTENT COULD HAVE BEEN CREATED WITH MORE BASIC EDITING.
AND THE SECOND THING: IF WE AS A SOCIETY SAY IT DOESN'T REALLY MATTER, DOES IT?
HE DOES HAVE SOME FRIENDS.
THIS COULD HAVE BEEN THE CASE.
WELL, THEN WE KIND OF LOSE THE FOUNDATION ON WHICH WE'RE ALL MAKING DECISIONS AND UNDERSTANDING THE WORLD AROUND US.
EVEN WITH THESE KINDS OF EXAMPLES, WHERE YOU ASK WHAT'S THE HARM, THERE IS HARM IN THE IDEA THAT WE DON'T KNOW WHAT TO TRUST.
THAT'S WHAT HAPPENS IF PEOPLE JUST GO, IT DOESN'T MATTER.
SO I THINK WE HAVE TO LABEL, WE HAVE TO MAKE IT CLEAR WHEN THIS IS AI-GENERATED OR IF IT'S BEEN PHOTO SHOPPED, IT'S IMPORTANT THAT WE KNOW AND HAVE AN ACCURATE HISTORICAL RECORD OF WHAT ACTUALLY DID HAPPEN OR NOT.
AND THAT'S WHAT WE HAVE TO KEEP REMINDING EACH OTHER.
MANY DISINFORMATION ACTORS DO NOT HAVE A VERY BIG AUDIENCE.
THEY MIGHT BE A NICHE AND HAVE A COUPLE PEOPLE THAT FOLLOW THEM.
WHAT THEY'RE DESPERATE FOR IS THE MEGAPHONE THAT THE MEDIA BRINGS.
SO MUCH OF THEIR TACTICS AND TECHNIQUES IS NOT A PARTICULARLY CLEVER USE OF TECHNOLOGY.
THE VULNERABILITY IS: HOW CAN I GET THE MEDIA TO COVER IT, HOW CAN I GET THE OUTRAGE, HOW CAN I GET PEOPLE TO HATE IT OR LIKE IT ON TWITTER?
AND THAT'S WHAT WE HAVE TO BE CAREFUL ABOUT -- ULTIMATELY HAVING OUR BRAINS HIJACKED.
THE ATTENTION ECONOMY IS ALL ABOUT THAT.
AND UNFORTUNATELY, THAT IS A BIGGER PROBLEM I THINK THAN THE TECHNOLOGY ITSELF.
>> CLAIRE, I WANT TO ASK A LITTLE BIT ABOUT THE BIDEN ROBOCALL THAT BECAME QUITE FAMOUS BEFORE THE NEW HAMPSHIRE PRIMARIES.
THAT'S A LONG WAYS AWAY FROM THE GENERAL ELECTION.
I THINK IT WAS THE FIRST TIME A LOT OF PEOPLE UNDERSTOOD HOW GOOD THE TECHNOLOGY WAS.
AND I WANT TO ROLL IN A CLIP OF THIS AUDIO HERE.
AND THIS IS REALLY NOT JOE BIDEN'S VOICE, AGAIN, FOR OUR AUDIENCE.
THIS IS A PIECE OF AI-GENERATED AUDIO THAT IS NOT THE PRESIDENT.
>> WHAT A BUNCH OF MALARKEY.
WE KNOW THE VALUE OF VOTING DEMOCRATIC WHEN THE VOTES COUNT.
IT'S IMPORTANT THAT YOU SAVE YOUR VOTE FOR THE NOVEMBER ELECTION.
WE'LL NEED YOUR HELP.
VOTING ONLY ENABLES THE REPUBLICANS IN THEIR QUEST TO ELECT DONALD TRUMP AGAIN.
>> THAT IS SUPPOSEDLY THE VOICE OF JOE BIDEN -- BUT IT IS NOT HIS VOICE -- ASKING PEOPLE NOT TO GO OUT AND VOTE.
I MEAN THAT'S A PRETTY POWERFUL TOOL WHEN IT COMES TO A CLOSE ELECTION, WHETHER IT'S IN NEW HAMPSHIRE, THE GENERAL ELECTION.
SO, CLAIRE, I WONDER, WHAT ARE YOU STUDYING WHEN IT COMES TO HOW AUDIO IS BEING USED TO MANIPULATE PEOPLE?
BECAUSE HONESTLY, FOR MOST PEOPLE, IF IT WASN'T THE CONTENT THAT WAS A LITTLE SUSPICIOUS, I WOULD HAVE THOUGHT THAT WAS JOE BIDEN'S VOICE.
>> YEP.
AND AS SOME HAVE SAID, UNFORTUNATELY, WE SEE GENERATIVE AUDIO MESSAGES USED QUITE FREQUENTLY, BECAUSE OUR EYES ARE VERY GOOD, BUT OUR EARS ARE WORSE.
WE DON'T HAVE GOOD CHECKS FOR ASKING, IS THAT REAL OR NOT?
WHEN IT COMES TO SCAMS, IN OTHER COUNTRIES YOU GET A CALL AND IT SOUNDS LIKE YOUR SISTER OR YOUR MOM SAYING THEY'RE IN TROUBLE.
SO WE HAVE TO WORRY ABOUT WELL-KNOWN VOICES.
BUT BEYOND FAMILY AND FRIENDS, I WOULD ARGUE THAT IN AN ELECTION CONTEXT, THE WORRY IS THE USE OF LOCAL TRUSTED MESSENGERS.
MAYBE A LOCAL FAITH LEADER WHOSE VOICE GETS USED TO SAY, I WOULDN'T BOTHER VOTING, OR, I'M WORRIED THAT IF YOU VOTE IT MIGHT BE DANGEROUS.
SO THAT DOES MEAN BEING AWARE OF HOW OUR COMMUNITIES MIGHT BE AT RISK THROUGH SOME OF THESE TECHNOLOGIES -- NOT MAKING PEOPLE OVERLY CONCERNED, BUT SAYING THAT IS A POTENTIAL THREAT.
LET'S BE AWARE: IF YOU HEAR SOMETHING, DOUBLE-CHECK IT BEFORE YOU TRUST IT IMPLICITLY.
>> SAM, WHAT'S YOUR TIP TO PEOPLE, ESPECIALLY WHEN IT COMES TO AUDIO?
WHY IS IT SO HARD FOR US TO DISCERN THOSE FACTS FROM FICTION?
>> THE FIRST THING I'D SAY IS THAT, UNFORTUNATELY, WE PUT ALL THE BLAME OR PRESSURE ON PEOPLE TO DETECT A LITTLE GLITCH IN THE AUDIO OR SPOT SOMETHING IN ONE OF THE IMAGES WE JUST SAW.
THAT'S NOT THE RIGHT STRATEGY IN THE LONG RUN.
THESE KEEP GETTING BETTER.
THERE ARE GLITCHES -- IF WE LISTEN CLOSELY, THERE ARE THINGS AN EXPERT MIGHT HEAR.
BUT FIRST OF ALL, WE HAVE TO TAKE THE PRESSURE OFF PEOPLE TO LOOK OUT FOR THE GLITCH.
IT'S HARD BECAUSE OUR FIRST STRATEGY WHEN WE LISTEN TO SOMETHING IS JUST TO LISTEN CLOSELY, AND THAT DOESN'T NECESSARILY WORK WELL.
IT'S PARTICULARLY HARD, THEN, TO DO THE NEXT STAGE WITH AUDIO, WHICH IS TO SEE WHETHER IT COMES FROM A MANIPULATED ORIGINAL OR SOMETHING ELSE.
WITH THE DONALD TRUMP AI IMAGES EARLIER, FOR EXAMPLE, YOU COULD DO A REVERSE IMAGE SEARCH.
YOU CAN'T DO THAT WITH AUDIO.
WE DON'T HAVE A WAY TO SEARCH FOR OTHER AUDIO SOURCES.
>> SAM, YOU'VE ALSO BEEN LOOKING INTO WHAT YOU CALL RESURRECTION DEEPFAKES.
WE SAW THIS REALLY BEING PLAYED OUT IN THE INDONESIAN ELECTIONS.
EXPLAIN WHAT THIS AI TREND IS.
>> THIS IS A TREND WE'VE SEEN FOR TWO OR THREE YEARS, WHERE YOU RECREATE SOMEONE WHO HAS PASSED AWAY AND USE THEM, TYPICALLY FOR POLITICAL PURPOSES.
SO WE'VE SEEN RESURRECTED JOURNALISTS KILLED IN MEXICO, AND DECEASED VICTIMS OF THE PARKLAND SHOOTING TALKING FROM BEYOND THE GRAVE.
AND MOST RECENTLY IN THE INDONESIA ELECTION, ONE OF THE PARTIES BROUGHT BACK THE FORMER INDONESIAN DICTATOR, SUHARTO, TO ASK THE PEOPLE TO SUPPORT THE PARTY.
AND IT'S VERY COMPLICATED, BECAUSE IT'S ALL ABOUT TAKING SOMEONE'S PREVIOUS PRESENCE AND MAKING THEM SAY THINGS THEY NEVER SAID IN REAL LIFE.
IT GOES BACK TO A LOT OF ISSUES WE REALLY NEED TO GRAPPLE WITH WHEN WE LOOK AT AI IN THE PUBLIC DOMAIN: CONSENT -- WHO CONSENTS TO THE USE OF THESE IMAGES AND TO TURNING THEM INTO THIS AUDIO AND THIS VIDEO -- AND DISCLOSURE.
>> THERE WAS RECENTLY A DEEPFAKE THAT WENT AROUND.
AND I WANT TO PLAY A CLIP OF IT.
THIS IS FROM PAUL HARVEY, WHO WAS A RESPECTED BROADCASTER, WHO WORKED IN THE UNITED STATES FOR DECADES.
HE HAD A REALLY SIGNATURE SOUND.
AND HE ONCE MADE A SPEECH, WHICH HAS BEEN COMPLETELY MANIPULATED HERE FOR POLITICAL GAIN.
LET'S ROLL THIS.
>> AND ON JUNE 14th, 1946, GOD LOOKED DOWN ON HIS PLANNED PARADISE AND SAID I NEED A CARETAKER.
SO GOD GAVE US TRUMP.
GOD SAID I NEED SOMEBODY WILLING TO GET UP BEFORE DAWN, FIX THIS COUNTRY, WORK ALL DAY, FIGHT THE MARXISTS, EAT SUPPER, THEN GO TO THE OVAL OFFICE AND STAY PAST MIDNIGHT AT A MEETING OF THE HEADS OF STATE.
SO GOD MADE TRUMP.
>> CLAIRE, ORIGINALLY, PAUL HARVEY HAD GIVEN THAT SPEECH AS "SO GOD MADE A FARMER."
IT WAS A COMPLETELY DIFFERENT SPEECH THAT HAS BEEN ALTERED.
PAUL HARVEY DIED IN 2009.
THIS WAS NOT DONE WITH HIS CONSENT.
AND I WONDER HOW MUCH OF THIS NOTION OF NOSTALGIA AND EMOTION FACTORS INTO WHETHER A PIECE OF MISINFORMATION OR DISINFORMATION SEEMS MORE BELIEVABLE.
>> WE KNOW FROM PSYCHOLOGICAL STUDIES AND THE WAY OUR BRAINS WORK THAT WE RELY VERY MUCH ON HEURISTICS, PARTICULARLY AT A TIME WHEN WE'RE OVERWHELMED.
WHEN WE HAVE HEARD A VOICE BEFORE, OR IT REMINDS US OF SOMETHING, THAT CARRIES WEIGHT -- EVEN IN MISINFORMATION RESEARCH, WE KNOW THAT THE MORE YOU SEE SOMETHING, EVEN IF IT'S BEEN FACT-CHECKED, THE MORE BELIEVABLE IT FEELS.
SO THIS KIND OF PLAYBACK, THIS NOSTALGIA, OH, I'VE HEARD THAT VOICE BEFORE AND I TRUSTED IT BEFORE, ALL OF THAT IS EXCEPTIONALLY POWERFUL.
SO WE'RE SEEING A PATTERN HERE OF RELYING ON ANOTHER TIME AND THE WAYS THAT PEOPLE HAVE FEELINGS AROUND EARLIER POLITICAL MOMENTS THAT WERE LESS CHARGED OR FIGURES THAT PEOPLE HAD RELATIONSHIPS WITH.
THAT'S WHAT'S HAPPENING HERE.
AND I WOULD ARGUE IT'S EXCEPTIONALLY POWERFUL.
>> SAM, WE'RE ALSO STARTING TO SEE POLITICIANS USE TECHNOLOGY AS SORT OF AN EXCUSE TO COVER UP THINGS THAT ACTUALLY MIGHT HAVE HAPPENED.
THERE WAS A LINCOLN PROJECT VIDEO, AND IT SHOWED -- WELL, LET ME JUST ROLL THAT CLIP HERE.
>> HEY, DONALD, WE NOTICED SOMETHING.
MORE AND MORE PEOPLE ARE SAYING IT.
YOU'RE WEAK.
YOU SEEM UNSTEADY.
YOU NEED HELP GETTING AROUND.
AND WOW.
>> SAM, WHAT'S INTERESTING IS THAT RIGHT AFTER THAT COMPILATION OF VIDEOS THE LINCOLN PROJECT TEAM HAD ASSEMBLED -- EXCERPTS OF PRESIDENT TRUMP LOOKING THESE WAYS -- HE WENT ON HIS OWN SOCIAL MEDIA PLATFORM AND SAID, QUOTE, THESE ARE LOSERS.
THEY'RE USING AI, AND THESE ARE ALL FAKE TV COMMERCIALS.
AND I WONDER WHETHER OR NOT THIS KIND OF PLAUSIBLE DENIABILITY CHANGES OUR UNDERSTANDING AND EXPECTATION OF WHAT IS REAL AND WHAT IS NOT.
HOW DO WE MAINTAIN SOME INTEGRITY THAT THERE IS A REAL FACT VERSUS WHAT'S MANIPULATED?
IT SEEMS THAT IF YOU SEE 20 OF THESE EXAMPLES, AFTER A WHILE, YOU'RE GOING TO ASSUME IT'S ALL FAKE.
>> THIS IS A PHENOMENON WE'RE SEEING GLOBALLY OF THIS PLAUSIBLE DENIABILITY.
AND IT REALLY RELIES ON THE FACT THAT PEOPLE ARE OFTEN CONFUSED WHAT AI CAN DO, AND THEY'RE CONFUSED ABOUT THEIR ABILITY TO DETECT OR DISCERN IT.
SO IT'S INCREDIBLY EASY FOR PEOPLE IN POWERFUL POSITIONS, WHEN THERE IS SOMETHING COMPROMISING, TO SAY, HEY, AI COULD HAVE MADE IT.
HEY, AI IS CAPABLE OF THIS.
TO SOME EXTENT THAT'S NOT TRUE.
SOME EXAMPLES WE SEE ARE PEOPLE EXPLOITING OUR FEARS OF AI VERSUS THE REALITY.
BUT IT ALSO TIES INTO PEOPLE'S VERY DEEP SENSE THAT MAYBE THEY WERE FOOLED BY THE POPE IN THE PUFFER JACKET, A SENSE THAT MAYBE WE CAN'T DISCERN.
THIS IS A REALLY CHALLENGING PHENOMENON, ALSO BECAUSE IT'S VERY EASY TO SAY YOU CAN'T BELIEVE THIS IMAGE OR THIS AUDIO -- OR ANY IMAGE OR AUDIO -- BUT IT'S INCREASINGLY HARD TO CONCLUSIVELY PROVE THAT SOMETHING WAS MADE WITH AI.
ONE OF THE EXPERIENCES WE'VE HAD WITH THE DEEPFAKES RAPID RESPONSE FORCE IS THAT WE'LL GET CASES WHERE SOMEONE HAS BEEN CAUGHT SAYING SOMETHING ON A COMPROMISING TAPE, AND THE INSTANT IT BECOMES PUBLIC, THEY COME OUT AND SAY IT WAS MADE WITH AI.
AND IT MAY TAKE SEVERAL DAYS FOR EXPERTS TO VERIFY THAT IT IS 90% LIKELY TO BE AI-MADE OR 90% LIKELY TO BE AUTHENTIC.
AND IN THAT GAP, THE PUBLIC HEARS THAT AI CAN BE USED TO FAKE ALMOST ANYTHING.
AND I THINK IT DOES START TO UNDERMINE OUR TRUST.
>> SAM, WHAT IS YOUR TIP?
WHETHER IT'S A SEASONED JOURNALIST OR ONE OF CLAIRE'S STUDENTS, WHAT DO YOU SAY TO SOMEBODY WHO IS TRYING TO VERIFY A FACT?
WHAT ARE THE MOST SIMPLE TOOLS THAT YOU SUGGEST THAT THEY USE AND WHAT'S THE MIND-SET THAT THEY SHOULD APPROACH IT WITH?
>> SO I ALWAYS SAY THAT WE NEED TO GO BACK TO THINKING ABOUT HOW AI ADDS ON TO THE MANIPULATED MEDIA WE'RE ALREADY SEEING AND TO THE VERIFICATION SKILLS WE ALREADY HAVE.
IT COMPLICATES THAT.
I USE THE ACRONYM SIFT.
S IS STOP: DON'T LET YOUR EMOTIONS CARRY YOU AWAY WHEN YOU SEE SOMETHING THAT SEEMS TOO GOOD TO BE TRUE.
I IS INVESTIGATE THE SOURCE: TRY TO FIND OUT WHERE IT COMES FROM.
F IS FIND OTHER COVERAGE: SEE IF ANYONE ELSE IS COVERING IT.
AND T IS TRACE IT BACK: SEE IF YOU CAN FIND THE ORIGINAL.
WITH MANY OF THE EXAMPLES WE'RE LOOKING AT, IF WE TRACED THE DEEPFAKES BACK, WE WOULD SEE THEY CAME FROM A SATIRICAL SITE.
IF WE RAN THE TRUMP IMAGES THROUGH A REVERSE IMAGE SEARCH, WE WOULD SEE THEY HAD NEWS COVERAGE AROUND THEM.
DOING THOSE STEPS FIRST, SO WE DON'T ALL HAVE TO DO THE SAME WORK OF TRYING TO BE FORENSIC ANALYSTS, IS ABSOLUTELY CRITICAL.
AND ONE OF THE THINGS WE'VE BEEN CALLING OUT IS A GAP IN ACCESS TO THE MORE TECHNICAL TOOLS THAT WOULD LET A BROAD RANGE OF JOURNALISTS DO THE ANALYSIS AND KNOW HOW TO EXPLAIN IT TO THE PUBLIC.
THAT'S SOMETHING WE'VE GOT TO ADDRESS AS AI TOOLS GET BETTER, AND THAT ALSO REQUIRES PUTTING THE ONUS ON PLATFORMS AND OTHER PEOPLE WHO ARE CREATING THE AI TOOLS TO MAKE IT AS EASY AS POSSIBLE BOTH TO DETECT THE PRESENCE OF AI, TO BE ABLE TO LABEL IT, AND ALSO TO BE ABLE TO AUTHENTICATE THE REAL.
I DON'T WANT TO PLACE PRESSURE ON THE INDIVIDUAL TO BE A FORENSIC ANALYST, BUT WE SHOULD PLACE PRESSURE ON NEWS ORGANIZATIONS TO DO THIS THEMSELVES.
WE NEED TO FIND MUCH BETTER WAYS TO MAKE AI DETECTABLE, EASIER TO LABEL IT WHEN WE SEE IT IN THE TIMELINES OR ENCOUNTER IT IN THE WILD.
AND ALSO MAKE IT EASIER TO AUTHENTICATE THE REAL, TO SHOW WHEN A PARTICULAR THING IS MADE IN A TIME AND PLACE.
IF WE DO THOSE THINGS TOGETHER, WE'LL BE A MUCH MORE RESILIENT PLACE.
>> CLAIRE, WHERE ARE WE IN THAT CONVERSATION WITH THE TECHNOLOGY PLATFORMS AND THE BIG TECHNOLOGY COMPANIES THAT ARE CREATING THESE TOOLS IN THE FIRST PLACE -- WHETHER THEY ARE ABLE TO ADD APPROPRIATE WATERMARKS THAT ARE EASY TO TRACE OR FIND, OR WHETHER, IF YOU'RE GOOGLE, YOU CAN UPDATE THE CHROME BROWSER TO AUTOMATICALLY FLAG THAT THIS IS A SYNTHETIC PIECE OF MEDIA?
>> ABOUT TWO WEEKS AGO, OPENAI LAUNCHED A NEW TOOL THAT ALLOWS YOU TO WRITE A SENTENCE AND GET A 60-SECOND VIDEO.
FOR THE FIRST TEN MINUTES, I, LIKE MANY PEOPLE, WAS SCROLLING IN AWE OF WHAT HAD BEEN CREATED.
FOR THE SECOND TEN MINUTES, I WAS ASKING, HOW IS THIS ALLOWED?
IT'S LIKE PUTTING A CAR THAT CAN DRIVE TWICE AS FAST ON THE INTERSTATE WHEN THERE ARE NO RULES OF THE ROAD.
I FIND IT ASTONISHING THAT THIS CAN BE ROLLED OUT WITHOUT ANY KIND OF THOSE SAFEGUARDS.
AND WE KNEW THIS WAS COMING.
SO WE CAN'T JUST, YOU KNOW, PUT ON A WATERMARK THAT CAN BE PHOTOSHOPPED OUT.
WE NEED REALLY SIGNIFICANT AND SOPHISTICATED TECHNOLOGY THAT WOULD EMBED THOSE KIND OF MARKINGS.
BUT THE IDEA THAT THEY CAN LAUNCH THESE NEW PRODUCTS WITHOUT THAT IS ASTONISHING.
AND WE DON'T HAVE A REGULATORY FRAMEWORK RIGHT NOW.
IMAGINE LAUNCHING A NEW FOOD OR A NEW CAR -- WE'D HAVE TO HAVE ONE.
THESE GUYS ARE CREATING ALL SORTS OF THINGS.
SO I AM CONCERNED THAT IN THIS VERY SHORT TIME PERIOD BEFORE THE ELECTION, THESE NEW TOOLS ARE BEING ROLLED OUT AT A PACE THAT IS MUCH FASTER THAN THE SPEED AT WHICH WE AS CONSUMERS CAN CATCH UP AND ADAPT.
>> I'M NERVOUS.
I'M WORRIED THAT WE'RE PANICKING, AND OUR PANICKING IS DRIVING HASTY ACTIONS AND A REAL DEGRADATION OF TRUST.
AND IN SOME WAYS, I'M ALSO EXCITED -- I COME FROM A CONTEXT OF THINKING ABOUT THE CREATIVITY OF VIDEO AND IMAGES, SO THERE IS POTENTIAL THERE.
OVERALL, I'M NERVOUS, AND I WANT US TO PREPARE MUCH MORE ACTIVELY, AS CLAIRE SAID, BUT NOT TO PANIC, BECAUSE PANIC PLAYS INTO THE HANDS OF PEOPLE WHO WANT TO USE THESE TOOLS MALICIOUSLY.
>> CLAIRE WARDLE, THE INFORMATION FUTURES LAB AT BROWN UNIVERSITY, AND SAM GREGORY, THE EXECUTIVE DIRECTOR OF WITNESS, THANK YOU BOTH FOR JOINING ME.
>> THANK YOU VERY MUCH.
>> THANK YOU.