
Artificial Intelligence Can Hallucinate, Too.
Season 4 Episode 4 | 4m 44s | Video has Closed Captions
How Artificial Intelligence differentiates and hallucinates.
Can artificial intelligence tell the difference between labradoodles and fried chicken? ...probably. But it can also see things that... aren't there.

Quick, can you pick all the labradoodles from the fried chicken?
How many labradoodles can you see?
Generally, people are pretty good at knowing what they're looking at.
But it's been reported that artificial intelligence struggles to tell the difference between these pictures.
Or between chihuahuas and blueberry muffins.
Sheepdogs and mops.
Puppies and bagels.
Or dalmatians and ice cream!
Is there something particular about dogs that computers just don't get?
Well, not really!
Thankfully, researchers have shown that dogs and food can be distinguished pretty well.
In the dogs versus foods case, algorithms can identify which is which with some 90 percent accuracy.
This is thanks to artificial neural networks, algorithms that are structured in a similar way to the brain.
In the last episode, we explored how artificial neural networks are really good at finding patterns in data.
To learn something, the network takes lots of examples. Given songs with many instruments and vocals, it works out which auditory patterns resemble a voice, then uses those patterns to isolate the voice from the other sounds.
With images, after a deep neural network has seen thousands of sample dog photos, it can learn what a dog is and identify dogs in new photos as accurately as you can.
Or almost as accurately.
Remember, there was some error in the dog/chicken caper.
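To make that concrete, here is a minimal sketch of how a dog-versus-food classifier could be trained in PyTorch. The folder layout, the choice of ResNet-18, and the hyperparameters below are illustrative assumptions, not details from the video.

```python
# A minimal sketch of a dog-vs-food image classifier (PyTorch).
# Paths and hyperparameters are assumptions for illustration only.
import torch
import torch.nn as nn
from torch.utils.data import DataLoader
from torchvision import datasets, transforms, models

# Standard preprocessing: resize every photo and normalise pixel values.
preprocess = transforms.Compose([
    transforms.Resize((224, 224)),
    transforms.ToTensor(),
    transforms.Normalize(mean=[0.485, 0.456, 0.406],
                         std=[0.229, 0.224, 0.225]),
])

# "Lots of examples": thousands of labelled photos in two folders,
# e.g. data/train/dog and data/train/food (hypothetical paths).
train_set = datasets.ImageFolder("data/train", transform=preprocess)
train_loader = DataLoader(train_set, batch_size=32, shuffle=True)

# A small pretrained network, with its final layer replaced so it
# outputs two scores: dog or food.
model = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)
model.fc = nn.Linear(model.fc.in_features, 2)

optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)
loss_fn = nn.CrossEntropyLoss()

# Each pass over the data nudges the weights so the learned patterns
# separate "labradoodle" from "fried chicken" a little better.
for epoch in range(5):
    for images, labels in train_loader:
        optimizer.zero_grad()
        loss = loss_fn(model(images), labels)
        loss.backward()
        optimizer.step()
```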
The trouble starts when the input signals are just too similar.
If the pattern that says labradoodle is the same fuzzy curly pattern that makes up this sheepskin I bought a few years ago, how is a computer to tell which is which?
It might seem like a trivial problem.
So what if a computer can't tell some dogs from some food?
But this is an example of how hard it is to close the gap between machine and human intelligence.
If we develop incredibly precise algorithms, it might mean that when an example changes only a little, the machine changes its mind about what's in the photo, and struggles to understand that a photo of a dog wearing a hat is still a dog.
And this is one of the hardest problems in artificial intelligence: common sense.
How can we build machines that have common sense?
So they don't have to be trained on every instance of all the objects and animals that exist?
And coding common sense is not the only problem engineers have to solve.
Another thing AI can do is hallucinate.
It can be tricked into seeing and hearing things that don't exist.
Some researchers tricked a computer into seeing this cat as guacamole.
This happens because, no matter how accurate AI systems get at identifying objects in images, they are still vulnerable to what are called "adversarial examples."
Like our cat.
Adversarial examples fool AI because they carry a special pattern of noise from things like lighting or texture that leads to the machine interpreting the image entirely differently.
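As a rough illustration of how such a noise pattern can be produced, here is a sketch of the fast gradient sign method in PyTorch. The video doesn't say which attack the researchers used; the model, the epsilon value, and the cat class index below are assumptions for the example.

```python
# A rough sketch of crafting an adversarial example with the
# fast gradient sign method (FGSM). Illustrative only.
import torch
import torch.nn as nn
from torchvision import models

model = models.resnet18(weights=models.ResNet18_Weights.DEFAULT).eval()

def make_adversarial(image, true_label, epsilon=0.01):
    """Add a tiny, carefully chosen noise pattern to an image so the
    classifier's prediction changes, while a person still sees the
    same picture."""
    image = image.clone().requires_grad_(True)
    loss = nn.CrossEntropyLoss()(model(image), true_label)
    loss.backward()
    # Step each pixel slightly in the direction that most increases
    # the network's error: the "special pattern of noise".
    noisy = image + epsilon * image.grad.sign()
    return noisy.detach().clamp(0, 1)

# Usage (with a hypothetical cat photo `cat_tensor` of shape
# [1, 3, 224, 224], pixel values in [0, 1], and ImageNet class 285):
# adversarial_cat = make_adversarial(cat_tensor, torch.tensor([285]))
# The model may now label the nearly identical image as something else.
```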
Here, MIT researchers 3D-printed a turtle that, because of an altered pattern on its shell, the artificial neural network sees as... a rifle.
Similarly, the texture of this baseball means it's seen as... espresso.
Neural networks can struggle with 3D objects because they're normally trained with 2D images.
Still, this cat photo is recognised as... guacamole.
But when it's slightly rotated, this pattern of noise disappears so it's correctly identified as a cat.
We humans can pretty obviously recognise these images.
But the machine... sees something that's not there.
To be fair, computer vision has seen significant progress in recent years.
And in some cases it's more accurate than a human's.
But adversarial examples are a big concern for artificial intelligence.
Sure, AI is great at seeing an image, but we need to do a lot more work in training AI to confidently recognise 3D objects.
When image recognition is applied to things like driverless cars, a machine hallucinating can have big implications.
For now, here's a final question for you: given that people eventually look like their dogs, are there people out there who look like a blueberry muffin?
Do I look like fried chicken?