This time, with feeling: Robots with emotional intelligence are on the way. Are we ready for them?
Researchers are developing robots that use AI to read emotions and social cues, making them better at interacting with humans. Are they a solution to labor shortages in fields like health care and education, a threat to human workers, or both?

A child interacts with Milo, a robot designed to help learners with Autism Spectrum Disorder (ASD) practice their communication and social skills. Image credit: RoboKind
"The robots are coming!" Most of the time, this phrase is a warning, a reference to the machines coming to take our jobs. But robots are also coming simply to help out, thanks to infusions of artificial intelligence (AI) making them smart, social, and capable of interacting with people in a variety of settings.
Robotics and AI came up together in the 1950s, but limitations in AI software and data prevented much crossover between them for most of the last 60 years. For decades, robots were fairly dumb, stationary machines mainly used in industrial settings, created to do simple physical tasks over and over again. Now that faster processors, new algorithms, and big data are powering an AI renaissance, however, researchers are starting to upgrade robots with intelligence—and liberating them from the factory floor.
Today, many robots are being designed to live among us, to be social and adept caretakers, tutors, and companions. And unlike the robots that hollowed out factory jobs in the first wave of automation, the majority of these social robots are being designed to solve worker shortages and assist, rather than replace, human workers. It remains to be seen whether things will play out this way, of course. In the meantime, fears that robots will take jobs that involve human interaction away from people who need them—and can do them better—will likely persist.
One of the labor shortages social robot research is trying to fill is in elder care. By 2050, there will be 1.6 billion people worldwide who are 65 and older, more than double the current population in that age group. The number of people aged 80 and older is expected to more than triple between 2015 and 2050. In the U.S. alone, senior citizens will need an estimated 3.5 million additional health care professionals and workers by 2030.
Given the scale of demand, developers believe that rather than robots being an alternative to human caregivers, the choice may sometimes be between robots and no care at all. But even if that’s the case, the question remains: Can they do the job?
ENRICHME, short for ENabling Robot and assisted living environment for Independent Care and Health Monitoring of the Elderly, is a mobile robot designed to help older people with a range of tasks, from exercising to remembering where they have put things. The robot was tested in retirement homes in three European countries to see if it could help combat cognitive decline and improve quality of life. Early results show that users accepted the robot and that it helped them stay more cognitively and physically active, while also solving everyday difficulties like finding misplaced items.
This type of robot is part of a new field known as “ambient assisted living,” which uses technology to create environments in which elderly patients can be safer and more independent. ENRICHME’s creator, Nicola Bellotto, a computer scientist at the University of Lincoln in the UK, designed the robot to be both useful when needed and a discreet presence that doesn’t intrude unnecessarily into the elderly users’ lives.
"They were quite happy about the experience with this new technology assisting them,” he said of the test users.
Even stationary robots are finding ways to engage. Take Mabu, a smiling yellow robot holding a touchscreen that is small enough to live on a side table. The “smart home companion” does things like remind users at risk of heart failure to take their medicine. This helps extend the points of contact that people have with their doctors—a real need given that most patients visit a doctor’s office only every six months and must try to control their medical issues on their own most of the time.
“There is definitely trouble with engaging patients over long periods of time,” says physician Pat Basu, a board member at Catalia Health, the company behind Mabu.
Mabu users check in with the robot every day. As it gets to know them, Mabu can suggest things like low-sodium diet options, recommend calling a doctor, or even make small talk. Cory Kidd, the creator of Mabu and the founder and CEO of Catalia Health, says an iPad loaded with a similar program might work well for a week or two, but the more anthropomorphic interface of Mabu creates a sustainable relationship that improves patient care.
“We are getting information we didn’t have before,” Kidd says.
With advances in medicine, people are living longer—in many cases because they survive events like strokes and heart attacks, which then require long-term physical rehabilitation. For these more complicated interactions, a mechanical doctor or assistant would have to closely follow the lead of human clinicians, for both careful physical interactions and nonphysical ones, like explaining a treatment.
“That’s the kind of interactions we want for our robots,” says Ayanna Howard, a roboticist at Georgia Tech. “We want to mimic the human-human interaction.”

Dr. Ayanna Howard, a roboticist at Georgia Tech, focuses on the development of "intelligent agents that must interact with and in a human-centered world." Image credit: Georgia Tech College of Computing
With Social Robots Come Social Problems
Not everyone is excited about robots. In fact, no sooner had engineers started building them than people started attacking them. Examples of such violent behavior range from kicking the six-foot-tall robots that roam Walmarts and the smaller delivery bots on some American streets to the beheading of an experimental hitchhiking robot in Philadelphia.
The reasons people abuse robots are varied, with experts pointing to anxieties over job security and fear of robots taking over. One robotics company executive suggested people may even be using them as an “anger management” tool. There may be ways to mitigate this type of behavior, like teaching children how to interact with robots at an early age or giving the machine a name. And while the angry responses to videos of robot abuse show that many people have empathy for the machines and find that type of behavior abhorrent, such abuse may become more common as robots become a bigger part of our daily lives.
“It’s the same response that we would see with animals,” Dr. Howard says. “We think it’s wrong.”
Another very human problem that having robots among us has triggered is racism. If a robot is anthropomorphized, with features like eyes, nose, and a mouth, users sometimes assign the robot a race based on the color of the machine. So far, the majority of these anthropomorphic robots are white, and at least one study has shown that people are biased against darker versions.
Not having a variety of differently hued robots could have far-reaching, and negative, implications—particularly as they become more human-like. If robots become essential in our lives as caregivers, teachers, and companions, and they are all white, this could reinforce cultural stereotypes that associate positive qualities like helping and competence with whiteness. To prevent racism from spreading to our mechanical counterparts, it will be essential for social robots to reflect the communities using them.
Robots working with kiddos
Social robots are also being developed to work with people on the other end of the age spectrum from seniors: children. Some of the most promising work is being done with kids with Autism Spectrum Disorder (ASD), which is estimated to affect 1 in 68 children in the United States.
Robotic company RoboKind makes Milo, an expressive classroom robot that the company says "never gets tired, never gets frustrated, and is always consistent."
Milo demonstrates various facial expressions and delivers lessons verbally, while a screen on the robot’s chest shows symbols meant to help the student understand what is being said. The robotic ability to repeat something again and again in the same tone without tiring is particularly suited to helping children with ASD learn. Milo is also being used in special education classrooms and showing benefits for children with Down syndrome, ADHD, trauma, and other social or emotional diagnoses.

RoboKind designers say Milo, their classroom robot, ‘never gets tired, never gets frustrated, and is always consistent,’ qualities that educators who work with ASD learners value. Image credit: RoboKind
“The robots can do some things people can’t do,” says Richard Margolin, the CEO of RoboKind.
Beyond helping children with special challenges, robots are being designed to help tutor and encourage learning more generally. RoboKind, for example, has created a curriculum for the Milo robot to teach coding skills. Another project, a robot named Minnie from the University of Wisconsin–Madison, is designed to encourage middle-schoolers to read through back-and-forth interaction.
In each case, the approach is centered on providing a social component similar to a study buddy or tutor—not a substitute teacher—but with extra patience built in.
Artificial emotional intelligence
Despite the promise of helpful robots taking shape in labs today, their real-world impact has been minimal thus far. Their skill sets tend to be quite narrow, which means they may be good at doing one thing but fail at being useful in holistic ways. Companies producing more generalized home robots, such as Jibo and Anki, have failed. It’s not clear the market is ready for social robots to take off in a big way.
What’s missing that might help bring social robots closer to what sci-fi stories have dreamed up? More feeling. Even AI-enhanced robots that can learn about and interact with their primary users may still lack an understanding of human emotion. If we want tech that is more like us, however, it needs to know about feelings and learn how to make meaningful connections with humans. To solve this problem, some researchers are teaching robots to recognize emotions through nonverbal cues such as facial expressions, and then react appropriately.
Emotional intelligence hasn’t been a priority in previous work in robotics and is just now starting to catch up, says Gabi Zijderveld, the chief marketing officer at Affectiva, an emotional AI company.
“It was never part of the fundamental design thinking,” Zijderveld says. “Design was very much focused on the IQ element.”
Affectiva was the first business to market “artificial emotional intelligence.” Its algorithms, trained on nearly six million faces from 87 countries, can detect seven emotions: anger, contempt, disgust, fear, joy, sadness, and surprise. Most customers are using the technology for webcam analysis, but Affectiva’s technology is also starting to make its way into personal robots.
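To give a rough sense of what “detecting emotions from faces” means in software, here is a toy sketch in Python. It is not Affectiva’s system: real products use deep networks trained on millions of face images, while this example hand-writes scoring rules over hypothetical facial “action unit” intensities (the feature names and weights here are invented for illustration). Only the seven-emotion taxonomy comes from the article.

```python
# Toy illustration: map hypothetical facial "action unit" intensities
# (values in 0.0-1.0) to one of the seven basic emotions named above.
# Real emotion-AI systems learn this mapping from data; these rules are
# hand-written stand-ins for a trained model.

EMOTIONS = ["anger", "contempt", "disgust", "fear", "joy", "sadness", "surprise"]

def classify_expression(features):
    """Return (emotion, score) for a dict of facial feature intensities.

    `features` looks like {"smile": 0.9, "brow_raise": 0.1, ...}; any
    missing feature is treated as absent (0.0).
    """
    scores = {
        "joy":      features.get("smile", 0.0),
        "surprise": features.get("brow_raise", 0.0) + features.get("jaw_drop", 0.0),
        "anger":    features.get("brow_furrow", 0.0) + features.get("lip_press", 0.0),
        "sadness":  features.get("lip_corner_depress", 0.0),
        "fear":     features.get("eye_widen", 0.0),
        "disgust":  features.get("nose_wrinkle", 0.0),
        "contempt": features.get("smirk", 0.0),
    }
    # Pick the emotion with the highest score.
    emotion, score = max(scores.items(), key=lambda kv: kv[1])
    return emotion, score

# Example: a strong smile with barely furrowed brows reads as joy.
print(classify_expression({"smile": 0.9, "brow_furrow": 0.05}))
```

The hard part in practice is not this final mapping but reliably extracting the facial features from video in the first place, which is where the large training datasets come in.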
Could this ability help make new relationships between humans and machines possible? And is that something we should want? Many people bemoan the internet age and smartphones for making us less social, and at least one study has found that “digital addiction” can lead to increased anxiety, depression, and loneliness. But social AI developers like Cynthia Breazeal argue that it doesn’t have to be that way.
“That’s an issue of design,” says Breazeal, head of the Personal Robots group at the MIT Media Lab. “It’s not a technology that should just be about information.”
Just as emotional intelligence is no longer being overlooked, encouraging socialness can be made a priority in robots. Depending on the needs or desires of users, interactions with robots might end up combining aspects of those we have with teachers, companion animals, and friends. While the end result might not be quite the same as human friendships or partnerships, it could still be fulfilling.
“It’s a different type of relationship that opens people up to interacting in a different way,” Breazeal says.
To make this type of companionship work, robots would need some kind of understanding of what makes humans tick. With anywhere from 60 to 80 percent of human interactions based on nonverbal communication, robots would have to learn to pick up on cues like facial expressions and body language that come naturally to people.
“Being smart also means having emotional intelligence,” Zijderveld says.
It’s a lesson AI developers are learning all too well as they struggle to endow machines with the social and emotional skills most humans take for granted.