Connections with Evan Dawson
Are you ready for augmented reality?
3/23/2026 | 53m 1s | Video has Closed Captions
AR may reshape daily computing, says Barry Silverstein on AR, VR, and AI’s future impact.
Augmented reality (AR) is poised to transform daily computing, says Barry Silverstein, former Meta Reality Labs CTO. Now leading University of Rochester’s Center for Extended Reality, he discusses how AR, VR, and AI could reshape how we work, learn, and interact.
Connections with Evan Dawson is a local public television program presented by WXXI
>> From WXXI News.
This is Connections.
I'm Evan Dawson.
Our connection this hour was made in September of 2024, when Meta CEO Mark Zuckerberg took the stage at an event to show off one of Meta's new products.
He said it was called the Meta Ray-Ban Display, and he wanted to show how easy these AR glasses are to use, AR standing for augmented reality.
The glasses, Zuckerberg said, would replace the keyboard, the mouse, the touchscreen, the buttons, the dials.
Instead, you can send signals from your brain with little muscle movements that the neuro bands pick up, allowing you to silently control your glasses.
Want to read a text thread?
The thread can pop up within the display in your glasses.
No need to check your phone.
And that's just the beginning.
Zuckerberg got a big cheer from the audience when he asked his AR glasses to play "California Dreamin'" by the Mamas and the Papas.
The song began to play, and then Zuckerberg made a little motion with his hand, as if he were turning up a volume dial, and suddenly the volume on the song surged.
The audience applauded.
Here was a product responding to what the user was thinking.
No need to touch physical knobs anymore. Zuckerberg showed how someone wearing the Meta Ray-Ban Display could watch videos on their glasses while walking through a park.
That got another cheer from the audience.
But that video, that whole presentation left me wondering, do people actually want this?
Do we want to be on a screen when we walk through the park?
Will it make for a better grocery shopping experience, or a more annoying one?
Now, well beyond Mark Zuckerberg's prediction that almost everyone will delight in AR glasses and be constantly on smart screens connected to our brainwaves, there are some interesting use cases in professional settings.
AR glasses can help surgeons.
They can help diagnose problems faster.
They could lead to more precise machine work.
The list is long.
Now, if you want the Meta Ray-Ban glasses today, or really any AR glasses, there's a range here, but you're paying starting around 300 bucks all the way up to about 800 bucks.
Most technology starts out expensive, and then the price declines as widespread adoption takes hold.
So are you in the market for AR glasses?
Do you want this?
My guest this hour is the former senior director and chief technology officer of Optics and Display at Meta Reality Labs.
He reported to Mark Zuckerberg annually.
He was at the forefront of some of the biggest developments in this field.
And now he's at the University of Rochester, leading the new center for Extended Reality.
Barry Silverstein is my guest.
Boy, I did this yesterday.
Barry.
Yeah.
It's great to have you in studio.
Thank you for taking the time to be with us, Barry.
And Dr. Silverstein spent how many years at Meta, actually, Barry?
>> Eight years at Meta.
>> Eight years at Meta.
And you've been at the University of Rochester now for how long?
>> About two and a half months.
>> So you are new to the university, and the Center for Extended Reality is the project.
What is the goal of the center?
>> So the goal of the center is to help develop the ecosystem of human-computer connection.
And essentially we expect this to be through AR, augmented reality, in combination with what's called contextually aware A.I.
So this is where artificial intelligence is aware of your surroundings, what you're doing, and your basic human nature, and provides you a connection to computing that is more authentic and more tied to the human form, instead of what we currently do, which is adapt to our technology.
>> When you and I spoke yesterday, we talked about how there are different applications; there will be professional applications.
When I think about A.I., when I think about AR, I am most optimistic in the field of medicine.
As a layperson, just standing on the outside, I'm less optimistic about personal use.
But there are different reasons that people would use this.
So we're going to discuss that this hour.
In general, take away the professional use.
The average person wearing glasses, walking through the park, walking down the street.
Do you think people want this?
>> I do. I think it is a more streamlined way of interacting that can keep you present in the world if it's done correctly.
So we pull our phones out of our pockets regularly throughout the day.
And we're in front of computer screens for most of the day, for a great number of people.
And this is to get the assistance of computing to be able to do our work or our socializing, our personal objectives.
And there's essentially no reason to have to adapt to those technologies.
Rather with AR, you should be able to have that technology available to you when you need it, when you want it.
And of course, also not forcing it upon you when you don't.
And that's the key to doing it right.
Doing it well.
>> I think you're making an important point about the way the culture has changed with smartphones.
People are on them all the time.
The frustration that people feel at dinner or in groups of friends when people are not paying attention.
I'm envisioning a different kind of frustration where you and I are talking, but you're wearing display AR glasses.
And I'm wondering, are you focused on me, or are you reading the text thread in your screen?
I mean, is that going to be a problem?
>> Yes, that will be a problem.
So the question is, how do we manage it?
What is the appropriate way to manage this?
And this becomes a social dynamics issue, and a tuning of our A.I. assistant.
So if some information popped up in my screen while I'm talking to you and it's distracting me, the wearer, then the A.I. should sense that I really didn't want that and learn to adapt and prevent it from happening.
At the same time, if it's contextually aware and knows that I'm in a social situation with you and we're having an important conversation, it should also represent the context of the situation and, again, prevent that from happening.
But of course, making that technology happen the way that we want it, the way that society needs it to happen so that it's good for humanity, is a challenge unto its own.
>> Well, I think you raise an important point that maybe we should hit right from the start here, which is, when we think about how to create the technology that is not going to be the frustrating disruptor but will actually enhance our lives, improve our lives.
One of the things we've spent the last several years with, really since September 2023 and ChatGPT, is the way the public has gained consciousness of the speed of A.I.
If you work in tech, you've known A.I. is coming.
But until 2023, the general public wasn't thinking about A.I. in the way that we are now.
I think we probably can agree on that.
And there's the fear that people have that A.I. is going to disrupt our lives in negative ways.
One of the things we heard talked about was, well, you know, you got to have the right people in charge of these companies.
They've got ethics departments.
And you have this wry smile as if to say, you're looking in the wrong direction for how to manage technology ethically.
So how do we do this?
And take me through how you see the dynamic of for-profit companies who are creating products, and us hoping that they're going to be ethical products, or products that will make our lives better.
>> Yeah.
So this is a really important point, and one of the reasons that I joined the University of Rochester: to help create the center at a university.
One of the pillars of the Center for Extended Reality is called the social impact pillar.
And this is less about the technology and more about the implications of the technology and what is good for society and humans.
We need to focus on what is good for our physical health, what is good for our mental health, and what is good for our social health.
And this is not something that you can expect corporations to be doing on their own.
And I'm not sure that we would actually want them to be doing that on their own.
>> Sam Altman says they're going to try to do that on their own.
>> Well, it's good when a company tries; I'm not going to argue against considering ethics and morality from the get-go as a good principle of a corporation.
However, it is not the fundamental principle of a corporation, which is shareholder value.
Okay.
And there's a conflict of interest there.
So we shouldn't expect that to happen.
When it happens, great.
We should reward it.
But we shouldn't expect it to happen.
Rather, this is something that needs to be tackled by organizations that are responsible to society, which tend to be universities, let's call them benign organizations, ones that are supposed to be data driven and do research to determine what is good and bad.
And governments, in fact, are supposed to be responsible for the legislation in society that establishes the boundaries by which corporations and individuals act.
But they need information to do that.
And the university is in a position to provide some of that information.
>> Well, for listeners and viewers who are wondering, aren't we talking about AR? We're talking a lot about A.I.
In a moment, I'm going to ask our guest to kind of take us through the terms here, but it's all under the tech umbrella, and certainly A.I. is used in AR.
They're going to be working sort of in tandem.
So we'll talk about that in a second here.
But I will say one other point that I want to hear you discuss, which is that when we talk about wanting to shape tech to make our lives better, to be ethical, to be used for good instead of ill, I take your point that when shareholder value is in the equation, you can't just rely on the Anthropics and, you know, the Metas and whatever.
I will credit Anthropic: people like Jack Clark and Dario Amodei have done a lot to at least say, here's where our products have behaved in ways we didn't expect, here's what drew alarm, and they've been more transparent than at least I've seen from some other companies.
The question then becomes, is it a cultural fix or is it a regulatory fix?
Do you think government needs to step in and say, here are the rules for the game, for A.I., for AR, for whatever new tech comes out?
Or do we need, as a culture, to discuss what we demand and how to steer that through our purchasing, through our habits, et cetera?
>> Yeah, I think it's the latter.
>> So it's not the government.
>> So no, it is.
Well, in theory the government should be doing what society is telling them to do, right?
So we have a democracy, and we vote for our elected officials based upon the policies that we support as individuals.
And so it becomes socially driven, individually driven, and then collectively driven to the policymakers to establish this.
Why you can't rely on corporations to do this on their own is that they are individuals as well, right?
So they have a perspective; you know, depending upon the company, it might be a perspective of one, or maybe ten on a board.
And those ten on the board are not representative of society.
They're representative of that corporate culture.
So that isn't necessarily what is in the best interests of a society.
And frankly, societies vary.
Right?
So even in our own country, societies vary in terms of what we consider acceptable.
And most of these companies are worldwide.
So a corporation might have different guidelines for different countries.
And the United States historically has led in creating legislation to manage new technologies; you know, I'd like to see that come back.
>> Do you think that we as a society can steer tech in a way that will benefit us and not harm us, or not degrade the quality of our lives?
>> Well, the hard part is education, right?
So understanding what technologies are coming, okay, so that you can get advance warning and address this and do the studies, so that people have the data that can be shared with them on the implications of the technologies.
Because until I share what you would use these for, you wouldn't have any idea, right?
Because no individual can think of all of the ways this can be used, good or bad.
And all technology tends to be possible to do good things and bad things with it.
We've all seen that throughout history.
So the key here is enough knowledge, and early knowledge, and transparency, so that we can get ahead of it rather than behind it.
>> When you came to the University of Rochester, this is a big get for the University of Rochester.
I'm not trying to puff up your ego here, Barry, but I mean, you are a big name in the field.
And the University of Rochester was, I think, rightly proud of bringing you in and bringing you to campus and having you have a chance to lead.
You've been in the room with some of the most powerful people.
I don't know if you think we're over-describing the Zuckerbergs and the Altmans of the world.
To me, there's a small group of people who are the most powerful people in tech and therefore in the world right now.
And I wonder if you think I mean, do you think Mark Zuckerberg is like a well-adjusted guy?
You're comfortable with him being in a position of power?
>> So I don't know Mark well enough to be able to comment on him as an individual.
>> You met with him once a year or something like that.
>> Once a year.
And it was mostly to show him demonstrations of the technology that we were developing.
It was really not a discussion, you know, about the future of the company or the future of society.
We never got into any of that.
So I can't really say.
I'm an observer, as you are an observer.
And, you know, we each make our own decisions as to what we see and how we feel about it.
>> But you didn't see any red flags that made you think, I don't know that I want to be working for this guy, or I don't know if I want him to have the power.
>> I don't want any individual to have the power, period.
So this is not about Mark Zuckerberg or Meta.
This is equally concerning with Google and Apple and all of the corporations.
>> So what do we do about that in society?
>> So again, we need to do research by which we determine the impacts of the technology, so that there's data available for policymakers to do the will of the people and what's good for society.
So we need to decide that ourselves.
What's acceptable?
You know, there are interesting historical examples, being of a certain age.
I remember when cameras in cell phones were new, okay.
And there was incredible concern about people having cameras everywhere.
Right.
And there were actually signs on bathrooms at one point that said no cell phones, no cell phone use in bathrooms.
You don't see those signs anymore.
>> Because it's just assumed that that's how we behave.
>> Because it's assumed that that's how we behave.
Now, there are people that break those rules regardless, and hopefully you hold them accountable.
So there are social norms, and then, for the real extreme things, there are laws, right?
And women, typically, should be particularly concerned around these things, cameras all over the place, because historically some of these images have been problematic.
>> Yeah.
Related to that, here's the BBC reporting on a recent piece related to privacy, unsolicited photos and the targeting of women with AR glasses.
And this is what the BBC writes.
Smart glasses billed as the future of wearable technology are having a resurgence.
But there are concerns these products are being used to harm, humiliate and infringe on the privacy of women.
One woman in London says she was filmed by a man using smart glasses, which have in-built cameras, without her knowledge or consent.
The video was then posted on social media, getting about a million views and hundreds of comments, many of them sexually explicit and derogatory.
I had no idea it was happening to me, she said.
I didn't consent to that being posted.
I didn't consent to being secretly filmed.
It really freaked me out.
It made me feel afraid to go out in public again.
So Barry, what you're saying is there are different frameworks for dealing with this.
Take that privacy issue.
What should we do about that privacy issue?
>> Well, so there are technical solutions and there are legal solutions.
So if you deconstruct this problem, the first thing, the last thing that you said, was that there were people commenting, making sexual comments on this.
Well, is that an appropriate behavior that we should allow in our society?
>> That should be shamed.
>> That should be shamed, but.
Right.
>> But I mean, X still exists.
You've seen, I mean, it's proliferating.
>> I agree.
But what it means is that our current legislation is not up to the challenges that already exist.
And this is only going to exacerbate them.
Okay.
>> But people will say that's regulating speech and you can't, you know, you got to be very careful about regulating speech.
>> You do.
But if you recall, in the '50s it was illegal to say a curse word on TV or on the radio, because that was a socially open area.
You're not allowed to go cursing down the street.
Right?
So if you go yelling curses down the street, that would be an infringement of other people's rights to a peaceful area.
So wouldn't you say this is an infringement on this woman's peace?
I would.
So the question is, are we legislating that correctly?
And I would argue no, we have not handled the existing technology well.
>> All of which is to say, it seems to me that your position, as someone who's helped develop this technology, is that you recognize the risks and the problems, but you don't think the problems we see or perceive are enough to say throw the tech out, ban the product, et cetera; there are ways you think we can do better in dealing with it.
>> So banning technology never works.
Banning advancement never works.
Okay, so it will be created.
And it will be used.
>> We can't.
>> Stop it, and there is no stopping it.
There is only managing it.
So the question is, how do you manage it?
Because there are good uses, right?
There are good uses for this that are valuable.
I know when I have the glasses on at a soccer game where my grandchild is playing, I love it.
I have it available.
I can take pictures.
It's great.
Or if I'm going to a family party, it's great.
>> Well, how can you use it at a family party?
>> So I don't have to pull anything out; my hands are free.
I can ask it to take a picture and it will take a picture.
I could ask it to post a picture and it'll post a picture.
And with the Ray-Ban Display, I can see the picture before I post it.
So this is valuable, okay, because it's more freeing when it's used correctly.
And as you shared earlier, right, in professional life there are really valuable applications.
So it's going to happen.
And you know, you could say, well, in the corporate world, in the enterprise world, we had personal computers and, you know, we didn't use them badly.
Okay.
Some people did, but in general there were corporate guidelines on how you use a computer in enterprise use.
And now, beyond enterprise use, are you going to say, well, I can't allow that product to be sold for public use because, you know, people might use it badly?
That's not going to happen.
Right?
So it's going to be there.
So the question is how do we address it?
>> I think I mostly agree with you, your assessment that we're not going to be able to stop tech, whether it's AR or whether it's A.I.
However, A.I. polls really badly in this country.
It actually polls better in Europe.
>> And China too.
They trust it in.
>> China, it polls much better in China.
A.I. is polling terribly in this country, probably because some of the people bringing A.I. are saying, by the way, this could wipe out 20 to 80% of jobs, and you can't stop it, and it's in the next five years.
I don't think that's a good PR campaign.
>> It's not particularly good for society, is it?
>> I would think so, that they would maybe rework the way they're selling it, so I can understand why it polls badly.
But if people are this scared of it and they're thinking this could disrupt society in ways that like we haven't seen before, why couldn't we stop it?
>> Why couldn't we stop it?
You know, I've never really thought of that because I didn't think it was possible.
>> You don't think it's possible?
>> I don't, and the sales of the products and the adoption of the products will tell us everything.
So we all have the personal choice not to buy it and not to use it.
Right.
Sure.
And one would say freedom of choice is something that is valued in this country, which also just means it's going to happen, because some people will choose to use it.
>> Well, for sure.
And, but AR glasses are one thing.
A.I.
we could say, well, I'm not going to use it.
I'm not going to use it.
However, to your point earlier about when when you're worried about serving your shareholders, companies are going to use it to reduce their workforce and they're already doing that.
Yes.
So the vast majority of Americans could say, I'm not going to use A.I.
I don't want it.
And they could still end up out of work and their lives totally disrupted.
Absolutely.
And you're saying we can't stop that anyway?
>> No, that's right.
You can't stop that.
But we should be having discussions about it.
What are we going to do socially to make sure that it's okay for society and okay for individuals?
How are we going to manage our lives?
Again, I'll go back to it: one of the visions of the Center for Extended Reality is to help manage those things.
So this goes across, you know, there's the job space, but there's also the education space.
So when students have access to all of the world's information on their heads at all times, they're going to rely on it.
Okay.
And if they rely on it, then they don't learn, and their critical thinking goes out the window.
And essentially we mummify people.
Okay.
And this becomes a social challenge. If we also have people who are not having jobs anymore, if humanity loses purpose, this is also something to be frightened of.
Because when people lose purpose, society tends not to go well.
>> I agree.
>> So the answer is not to eliminate the A.I.
It is to manage the A.I.
So if you're managing the A.I. for a student, you basically make it a tutor instead of a direct information resource, so that it tracks what the student is doing and helps them learn critical thinking.
And this is a way to scale teaching, so that individuals can have individualized resources that are tailored to their learning styles.
So that's a plus.
But if we don't do it, of course, it will mummify people.
And then we will have problems.
>> Are you already working with students at U of R?
>> So, not yet.
Okay, only two and a half months in, I figure.
I will be working with students; the center will be working with students.
But really, the way the center operates is that the center brings the problems of the space to the departments that already exist, and encourages them, through understanding and learning about the problems, to do research on them.
And I will work to help bring in dollars and support for these topics to be studied.
So in some ways, I'm a glorified fundraiser.
>> But you know, when you talk about the mummification risk, certainly anybody who's been in a classroom as a teacher, at the K through 12 level or higher, understands what the last three years have been like, already having to reroute how we test, how we examine.
And we've had some really tough conversations with teachers who are very frustrated, because they see very bright students who are outsourcing their critical thinking.
Not all of them, but to your point, you can use this stuff to level up and learn things.
You can become better at almost any task in the world.
It is amazing.
It's amazing.
Right?
But you can also decide, well, now I don't have to learn those things.
I will just let the A.I. do it.
And my concern is the balance is going to be off, Barry.
My concern is that the people who are leveling up through A.I., through AR, through the ways we can use tech walking around the world, that's in the single-digit percentile.
My concern is the vast majority of us are going to say, I don't need to read books anymore, I don't need to learn this.
And the corrosive effect? It's hard to measure that.
Do you share that concern or do you think that's pessimistic?
>> So, well, I'm a pessimist at heart.
Okay.
>> You're a pessimist.
>> I'm a pessimist at heart.
So the answer is that people will always use technology for bad, and even to their own detriment.
>> Why did you work in tech to develop this stuff if you're a pessimist?
>> So, I'm an optimist when it comes to technology.
Okay.
>> You're a pessimist about people.
>> I think I'm a pessimist about people's ability to use it correctly, even myself.
Right?
So human nature is lazy.
Okay?
And so if you could take an easier route, you will.
Okay.
And the question is, what's good for you?
And sometimes what is easiest is not the best thing for you.
And it takes some willpower, or some other things to be put in place, so that you don't do self-harm.
Okay.
And, you know, cat videos, right?
No, really, I'm all about dog videos, but, you know, I get stuck in a loop of dog videos.
Yes.
I'll get stuck in a loop of dog videos.
Okay.
Is it good for me?
The answer is no.
Okay.
Is it humorous for the moment?
Is it a nice distraction for 30 seconds?
Yes.
Okay.
But, you know, we're all wired for these kinds of behaviors.
So the answer is, how do we put systems in place to protect ourselves from ourselves as well as the technology?
Because, you know, if we go to a buffet every day, we're going to overeat; we're wired to do that.
Okay.
So, you know, eat to be prepared for a future famine.
Okay.
>> Well, you're saying that we don't ban buffets, even though...
>> No, but the point being is that what our instincts tell us to do is not always the best thing for our physical or mental health.
And so you need to have things in place that help with that.
And in the case of education, at this point, those tools are not in place for the teachers.
We've released this A.I., what I'll call wildly, without thinking ahead of time about what the implications are for students, right?
And it's put the teachers in a bind where they have to try and figure it out themselves.
Not that that's a horrible thing in terms of having the teachers involved in trying to do that; they need to be involved, because they're the ones with experience in the area.
They're the experts.
But you sure would like to have that software in place, okay, and those regulations in place, through study first, before you wildly enable this where everyone has broad access.
>> So are companies releasing products, in general, in the AR and A.I. tech spaces too quickly?
>> Yes, they are.
I mean, it is widely available, and that's making the speed of progress fast, which in some ways is very interesting.
But, you know, we're all sort of being, and this is historical with technology, society becomes a little bit of a guinea pig on this.
The way it works now is you release the technology.
Not all the problems are thought about.
New problems arise.
And then, after the fact, you try to correct.
Okay, but sometimes it's hard to put the genie back in the bottle.
>> But it shouldn't be that hard to predict.
Some of the problems are hard to predict.
Some of the problems should be obvious.
And an example is, I take your point that you're using AR glasses to go to a grandchild's game and say, I can take photos, I can post, I don't ever have to take my eyes off the field.
I can do all these things seamlessly.
Yeah.
I'm also thinking about Elon Musk's company and their Grok product, for example.
So you can't do this with Claude.
You can't do this with ChatGPT.
But on Grok, you could give Grok photos of people, usually women, sometimes children, and Grok will undress them in realistic ways.
There are lawsuits about this that blow my mind that we're even talking about.
>> Should you even have to have lawsuits about this?
>> I would think that this is.
>> Obvious, right?
But now I'm thinking if I if, if that same company is creating an AR glasses, I don't want it walking around society with the capability of undressing people through the tech that's sitting in the glasses.
Yeah.
Should we be worried about that?
>> the, the answer is yes.
So if, if there is something that that could be done bad with the technology, someone will figure out how to do it.
Okay.
And, and so the, the goal is, is to collect our, our resources to, to establish what those things are and do studies on them and or provide guidance to provide guidance to our policymakers that that makes that prohibited.
Okay.
And I think, essentially, we're kind of failing at that at this point, because there is no collective that is evaluating this, or at least not well enough, and providing guidance. And there is no legislation occurring in this country around restricting any of this stuff. Other countries are doing this. I'd sure like to see it in this country, but there's the counterargument, which is the fear that if we put restrictions in place, we'll fall behind in the A.I. race.
>> Of course, the competition.
>> And I don't, I don't buy that.
>> So you don't.
>> I don't buy that.
Why? So, there is what you could do behind the doors of the tech companies, and then what you enable to happen wide open. So you could certainly continue to develop A.I. and be a leader in it without enabling...
>> Putting products in the marketplace.
>> ...without putting all of the things in the marketplace that enable complete freedom of usage. So I think you can do some level of restriction there, but you have to do it carefully. And it takes knowledge, and it takes willpower and desire, and probably, in some fashion, pushing back on the tech companies, which is not an easy thing to do, because they are important companies for the country.
>> So when we come back from our only break, I'm going to ask Barry to tell us more about whether the glasses are going to be part of the future, which I would have said two years ago, maybe these things are not going to take off.
Barry thinks they will, and we're going to talk.
I want to know more about how the glasses work.
I really want to understand how our guest sees them in professional work applications.
What are the best applications for AR glasses?
What does he think that they might be able to accomplish for society that we can't currently do?
And for those who saw the price tags, we got some notes about, well, no one can afford these things, or a lot of people can't afford these things: $300 on the low end, $800 on the top end. I'm curious to know what Barry Silverstein says about the future of how commonly available augmented reality glasses, for example, could be. And Barry, as you know, is the director of the Center for Extended Reality and a faculty member at the Institute of Optics at the University of Rochester, and former senior research director and chief technology officer of Optics and Display at Meta Reality Labs.
We're coming right back on Connections.
I'm Evan Dawson. Friday on the next Connections, my colleague John Campbell joins us, talking about the rather hot debate in Albany about car insurance and whether New York State should change who is eligible to sue for damages after a car accident-based injury. My colleague Brian Sharp talks about a development project in the city of Rochester, and we meet the star of a film, a gentleman with Down syndrome, talking about his story.
>> Support for your public radio station comes from our members and from Excellus Blue Cross Blue Shield, working with members to find health coverage for every stage of life, helping to make care and coverage more accessible in more ways for more people across the Rochester community.
Details online at excellus.bcbsga. And Mary Cariola Center, supporting residents to become active members of the community. From developing life skills to gaining independence, Mary Cariola Center, transforming lives of people with disabilities. More online at MaryCariola.org.
>> This is Connections.
I'm Evan Dawson. Phone lines haven't been working this week; we've just been doing a kind of a system reboot. But if you want to communicate with the program, I'm getting emails: Connections at wxxi.org, Connections at wxxi.org. Or you can chat with us on YouTube. What is that, producer Megan Mack? You can leave us a voicemail, but maybe that feels like we're way back in time. My son, my 14-year-old, has told me: nobody makes phone calls anymore, Dad. He's fascinated with landlines. Tech changes.
And that's a big part of what we're talking about.
Catherine and Rochester writes to say, oh my goodness, AR glasses.
I'm a hard no.
I can see where it would help in medicine.
But the potential for this to be abused and misused by humans is massive.
We can't do cell phones right.
How would this not be even worse?
We need more human connection and this would likely further isolate us from each other.
Never mind all the privacy issues already discussed.
Catherine says no. Four times she said no; I'll just take the one there.
So a couple points.
We're going to talk medicine in a moment or other applications.
But part of what she's saying is that human-to-human connection is at risk here. And a lot of surveys show how disconnected we feel from each other despite technology; levels of depression are higher, isolation is higher.
Do AR glasses further isolate us or can they connect us more?
>> They can go either way. And this depends upon us, really: how we decide the A.I. assistant is going to work with us. If we leave the A.I. assistant in charge, certainly, I don't think it'll go well. If the human's in charge, and we can self-recognize and drive the A.I. effectively for our good, then I think it can go well. So if we get upset about being isolated by our device and the A.I. understands that we're upset, then the A.I. should back off and provide us more time for our social network, and advise us: hey, you haven't talked to a friend in a while, and help enable that.
Okay, so imagine if this was a human assistant.
>> A significant other.
Okay. Which tends to be an assistant to our lives. If you're trained on each other, you become assistants to each other. There are observations that are made; they can read your face and say, you know, something's wrong.
What's wrong?
Okay.
And after time, being aware of your circumstances, they can actually infer what's wrong.
So A.I. should be able to do this, and then help solve it for you. Okay: so, you know, you haven't talked to your family in a while. So maybe, you know, let's schedule a time with your family, right? And prompt you to do that, and then support it without distraction.
Right?
So there's no reason that an A.I. assistant couldn't do that. There's no reason that it couldn't be done for the good, just as we evaluate and help each other as friends or partners who do this level of observation at all times.
Same thing for a great mentor at work.
There could be a great mentor, or it could be a horrible mentor, a driver, someone who's taking away your control. Or it could be a mentor who is, you know, making you better all the time. Well, there's no reason that we can't make A.I., and design A.I., so that it becomes that level of support, or we can let it turn your brain into mush, right?
So that is a choice, okay, that we have as technologists and as a society: what we demand. We don't have to have smartphones. We choose to have smartphones, and we choose whether or not we pull them out and what we do with them.
Okay.
Like I said, sometimes we don't choose wisely.
And sometimes it's good to have someone else observe for us.
Okay.
And a true assistant, a truly valuable assistant, knowing what is good for humanity, with data, can provide the proper assistance, you know, in a way that is effective.
>> So describe for folks how this tech works now and how you think it will develop, these AR glasses. The first thing I thought of was, well, I need reading glasses. Am I putting these on over them, or can they be reading glasses with the A.I.? You can figure all that stuff out; everybody can wear these glasses, they can be effective readers. But they have technology, display screens, here. Tell people how it works for now and where it may go.
>> Yeah. So the technology, I'll call it rudimentary, even though it's not, because essentially what you're trying to do is put all the technology that's already in a cell phone, and in some ways what you'd like to hit also includes what's already in a self-driving car, into a pair of glasses which weighs, you know, 30 grams. That is a very big technical hurdle. But the glasses have audio, they have cameras, and the display glasses have a display in them. And we try to get as close as possible to a regular pair of glasses' form factor. They have a means of communicating with your phone, which acts as the compute and real communications device, and in general they have multiple microphones around them to be able to pick up sound around you. And so in the basic sense today, you can voice command, or with Meta's wristband you can command with your hand just by small motions, for the display glasses to do particular things, whether it's take a picture or scroll through.
>> Turn the volume up.
>> You turn the volume up, or you can touch the side of the glasses; there are controls on the glasses itself. Or you could do it manually; you can take a picture by clicking on the glasses. So you could do it auditorily, with your hands without touching the glasses, or by touching the glasses. So there are multiple means of control.
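The idea that one command can arrive by voice, by a wristband gesture, or by a touch on the frame is, in software terms, an input-dispatch pattern. Here is a minimal sketch in Python; the event and command names are invented for illustration and are not Meta's actual API:

```python
# Toy sketch: several input modalities normalized into one command stream.
# Modality names ("voice", "emg_wristband", "touch") and commands are
# hypothetical, not a real glasses API.

COMMAND_MAP = {
    ("voice", "take a picture"): "capture_photo",
    ("emg_wristband", "pinch"): "capture_photo",   # small finger motion
    ("touch", "double_tap"): "capture_photo",
    ("voice", "volume up"): "volume_up",
    ("emg_wristband", "swipe_up"): "volume_up",
    ("touch", "slide_forward"): "volume_up",
}

def dispatch(modality, event):
    """Map any modality's raw event to a single device command."""
    return COMMAND_MAP.get((modality, event), "ignore")

# Three different inputs, one resulting command:
assert dispatch("voice", "take a picture") == "capture_photo"
assert dispatch("emg_wristband", "pinch") == "capture_photo"
assert dispatch("touch", "double_tap") == "capture_photo"
```

The point of the pattern is that downstream code only ever sees a command like `capture_photo`, regardless of which modality produced it.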
And so the contextually aware part is, you could do what you could do with a phone, which is: hey, tell me about this building. And it'll take a picture of the building and tell you about the building, you know, without having to pull anything out of your pocket.
>> So you're walking down the street, you're on vacation, you see something, you just look at it, and you give it a short command or some kind of a prompt.
>> Give it a prompt.
Say: what am I looking at?
And it'll tell you what you're looking at.
And tell me more about that.
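That capture-then-ask flow can be sketched as a tiny pipeline. This is a toy illustration only: the recognizer is a stub dictionary standing in for a real vision-language model, and the frame IDs and landmark entry are made up:

```python
# Toy sketch of the "what am I looking at?" flow: identify the object in
# view, then keep context so "tell me more" refers to the same object.
# The recognizer is a stub dict; a real system would run a vision-language
# model on the camera frame.

KNOWN_LANDMARKS = {
    "frame_001": ("Eastman Theatre", "A 1922 concert hall in Rochester, NY."),
}

class GlassesAssistant:
    def __init__(self):
        self.last_detail = None  # context from the last identified object

    def what_am_i_looking_at(self, frame_id):
        name, detail = KNOWN_LANDMARKS.get(frame_id, ("unknown", ""))
        self.last_detail = detail
        return name

    def tell_me_more(self):
        # Follow-up question resolves against the remembered context.
        return self.last_detail or "Look at something first."

a = GlassesAssistant()
assert a.what_am_i_looking_at("frame_001") == "Eastman Theatre"
assert "1922" in a.tell_me_more()
```

The design point is the small piece of remembered context: without it, "tell me more about that" has no referent.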
>> And you'll get used to reading the display on your glasses screen, on the inside of your glasses screen.
>> Yes.
>> So you'll get used to that.
>> So you get used to that. Right now it's a single display in one eye; two displays, stereo displays, are on their way. That's as the technology improves, and eventually we'll be making it so that you can do an overlay of the display onto the real world itself. Which means that the information won't be like a display in the glasses; it will be an overlay on top of real-world objects. So if you're looking at a building, it'll start putting the description at the building, because it knows you're looking at the building, and the description is being written on the outside of the building. Or you're doing something near, and the highlighting of your text will be done on your text. So it'll be overlaid on the object. This will be basically doodling on the world.
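Anchoring a description onto the building itself, rather than into a fixed corner of the display, comes down to projecting a 3D world point into 2D display coordinates. A bare-bones pinhole-camera sketch, with made-up focal-length and display-center values (real AR systems add head-pose tracking, calibration, and lens-distortion correction):

```python
# Minimal pinhole projection: where on the display should a label for a
# world-space point be drawn? Assumes the camera sits at the origin looking
# down +z; focal_px, cx, cy are arbitrary illustrative numbers.

def project(point_xyz, focal_px=500.0, cx=320.0, cy=240.0):
    x, y, z = point_xyz
    if z <= 0:
        return None  # behind the wearer; don't draw the label
    u = cx + focal_px * x / z
    v = cy + focal_px * y / z
    return (u, v)

# A building 20 m ahead and 4 m to the right: label lands right of center.
u, v = project((4.0, 0.0, 20.0))
assert (u, v) == (420.0, 240.0)
```

Note how the label's offset shrinks with distance (`x / z`), which is what keeps it visually glued to the building as you approach or back away.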
>> Oh, okay.
And so maybe in that spirit, I remember a presentation, probably four years ago, in which someone was describing glasses. They said, well, if you like fishing: you're sitting on your couch at home, you put your glasses on, now you look down below your feet and you're sitting on a boat, and you are able to go fishing in your living room without ever getting on the water. It will be an experience that feels very much like fishing. It will look like fishing. You will have that same sort of satisfaction, but you will just be sitting on a couch. Is that more VR? Is that AR?
>> That is more VR. So you're putting yourself in an alternative world.
>> Are we going to go in that direction too?
>> Yes. So we're already in that direction. Okay. You know, for people who use VR, that continues to improve.
>> I mean, do you want to do stuff? I'd rather just go fishing.
>> So, but not everybody can go fishing.
>> I'm with you there. Oh, I'm 100 percent. Where this tech has application that I can see is for the disabled community; amazing things can happen, I would think. Yes. But for average people without a disability, who are thinking, I might go fishing today but I might just stay on my couch: I'd rather you go fishing.
>> Yes. So if we have a great A.I. assistant, it'll say: hey, rather than just playing the fishing game in your living room, why don't we plan a trip? That would be a great A.I. assistant. Because, hey, it's easier to just sit on my couch, and we might more easily do that.
>> Fair enough.
Now, in medicine, is it VR or AR? Is that AR in medicine?
>> So both are used in medicine. VR is generally used for training, scenario training, which is what VR is commonly used for in enterprise applications at this point.
>> So, and then AR in actual application: what's the best use that you can either currently see or envision with AR in medicine?
>> So, well, the best use I've seen so far is when a doctor is doing surgery and the MRI and CT scans are overlaid on top of the patient, and the system is guiding the surgeon on where to do the incisions and where they need to reach to get to the problem areas in the body, without having to fish around, because they have a direct overlay right on the body. I think that's just amazing. And they could do that today, just not well enough.
>> But that's coming more commonly, right? That'll get better.
>> So at the Center for Extended Reality, we have a goal of making that better, to the point where it is an overlay that is, you know, so rich in data that the doctor can really completely trust it.
>> That is amazing.
And you're at the Center for Extended Reality. Again, if you're just joining us: Barry Silverstein comes to the University of Rochester, spent eight years at Meta, and now you're at this incredible medical research facility, as you know. Yes. So I've got to imagine the pipeline into working with medicine is pretty direct there.
>> This is one of my favorite targets. So in the center we have four pillars. One is the hardware, which we talked about: the glasses, the wristbands, et cetera. The second one is making sure that the system is perceptually correct: it matches the human way of doing things rather than being mismatched, which causes discomfort. The third one is applications, and the applications that we're going to focus on at the university are education and medicine. And the reason that these two are really important for the university is, first, the university is really strong in these areas, so that helps match what the capabilities are there. But the other one is that these are some of the most sensitive applications. So if you want to talk about privacy issues and data management issues and impact on our human being, right? This is all addressed in medicine and education. These are the places where our humanity comes out the most. If you can do those well, and you've created an ecosystem with privacy and security, data management, et cetera, my hope is those same techniques that you use for those spaces get carried over into our social spaces, so that we can do those as well. Right? We can manage it just as carefully.
So, in medicine in particular, there are three areas that I'm excited about. The first one was training; I shared that. So training: do something that you need to do as a scenario. That's great. The second one is practice; I shared the surgical practice. The third one is diagnostics and trending. What I mean by that is, the way medicine works now is a doctor will do a treatment, and then you come back for a follow-up and tell them how the treatment went. And it's all qualitative. They might run a new set of tests, they run some new blood work, but they don't know how your everyday life is actually impacted by this situation. So if the glasses have the ability to measure your before and after, then they can determine if a treatment is actually effective or whether it's just placebo effect.
Remarkable.
And so studies should be able to be done. Just imagine neurological diseases, Parkinson's, right? You give a patient a drug, and then they come back and you ask them: well, you know, how's your shaking been? How are you sleeping? Et cetera, et cetera. But instead, imagine they measured the vibration of motions throughout the day, and they determine where things are working and where things are not. This is going to change the implications for how doctors treat and how patients react.
Absolutely amazing.
And then we also have things like our watch, right? So with the Apple Watch, people are measuring their hearts. So if you have these biological measurements on you that are richer: gesture, you know, your heart health, right? They're looking at blood oxygen and blood sugars, and facial expressions as well; mental health is expressed through facial expressions and movements throughout the day. If you have that information, a doctor can do trending and determine very early on whether or not there's something going on. So there could be pre-treatments rather than post-treatments.
And this could save both money and...
>> And more importantly, earlier intervention, which will help humans be healthier and have a better quality of life.
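The before-and-after trending described here boils down to comparing a summary statistic of continuous sensor data across a treatment. A toy sketch with fabricated numbers (not clinical data; a real study would use proper statistical tests, not a simple mean comparison):

```python
# Toy before/after trending: compare mean daily tremor amplitude from a
# wearable before and after a treatment. All readings are invented
# illustrative numbers, not real measurements.
from statistics import mean

def improvement(before, after):
    """Fractional reduction in mean amplitude (positive = better)."""
    b, a = mean(before), mean(after)
    return (b - a) / b

before = [0.80, 0.75, 0.82, 0.78]  # daily amplitude, arbitrary units
after = [0.40, 0.38, 0.45, 0.37]

r = improvement(before, after)
assert r > 0.4  # roughly a 49% reduction in this made-up example
```

The same comparison, run continuously instead of at a follow-up visit, is what would separate a real treatment effect from a patient's qualitative impression.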
>> Let me read two comments before we lose the hour.
And then will you just come back once in a while here and update us?
>> I'm happy for invitations.
>> I really appreciate you being here.
Mike says: I have a blind, autistic son who recently received a pair of glasses.
They allow him to read on his own without braille.
It is amazing and I am so happy for him.
>> Exactly.
So I saw I saw a video of a of a blind a blind girl who, who couldn't sit in a conversation with people who, who couldn't sign, couldn't sign.
And all of a sudden she was given the pair of glasses.
And it was directly it was directly closed captioning, captioning what was happening on around her.
And she was now part of the crowd as opposed to sitting there isolated.
>> Down to our last minute.
Isabel called in to say her concern is really about human connection.
She says we're already struggling with a loneliness epidemic, with disconnection.
People don't know how to talk to each other.
You put a screen in front of them.
It's only going to make matters worse.
I think it's one thing to use this technology for medical advancements and other things that could potentially help people, but people do not need to be attached to a screen 24/7. And we're already seeing the drastic implications of this on the youth.
So, down to our last minute or so: kids, their brains are not fully developed, and you put this tech in front of them. Are you worried they're going to be just as addicted as they are to their phones, and more disconnected?
You and I are fully formed adults.
We can figure out how we want to regulate this.
I'm worried about a teenager.
>> I would argue whether we're fully formed, but...
>> Okay, that's fair.
>> So, because we all change throughout our lives. And this isn't just a problem for the teens, but it is a big problem, a bigger problem, for the children. And adults are responsible for making sure the children's lives are healthy ones. So if you're going to buy your kid a pair of glasses, and it has an A.I. assistant with a display in it, buy the software package that makes sure it manages their information in a way that's healthy for them. It's not available today, but if you insist upon it before you put that thing on their head, that's a way to deal with it. Because they will be advantaged if they have a custom tutor available to them when they need it.
>> So, Barry Silverstein, I've got pages of questions we didn't get to, so you are welcome back here anytime, because these conversations are vital for our audience: to understand what's coming, how to use it properly, and maybe the right ways to think about these things.
I just want to thank you for being here.
Thanks for being in Rochester and for sharing this.
>> Thanks for the invitation, and I welcome the chance to come back. I'm excited to come back, and I hope people will support the Center for Extended Reality.
>> He's the director of the Center for Extended Reality at the University of Rochester.
Brand new.
And they've got a lot planned here.
Barry Silverstein, thanks for being here.
Thank you from all of us at Connections.
Thanks for being with us on our various platforms.
We are back with you tomorrow on member supported public media.
>> This program is a production of WXXI Public Radio.
The views expressed do not necessarily represent those of this station.
Its staff, management or underwriters.
The broadcast is meant for the private use of our audience.
Any rebroadcast or use in another medium without expressed written consent of WXXI is strictly prohibited.
Connections with Evan Dawson is available as a podcast.
Just click on the Connections link at wxxinews.org.
