
Artificial Intelligence in the Workforce
Season 29 Episode 9 | 56m 35s
Join the City Club as we hear from experts in AI in the workforce and how we can create and use AI technology to improve the social determinants of work and help eliminate barriers to success.
The City Club Forum is a local public television program presented by Ideastream

Production and distribution of City Club forums and Ideastream Public Media are made possible by PNC and the United Black Fund of Greater Cleveland, Inc. Good afternoon.
Hello, everyone.
Welcome to the City Club of Cleveland.
We are devoted to conversations of consequence that help democracy thrive.
It is Thursday, February 8th.
I'm Jeff St. Clair, host and reporter at Ideastream Public Media and moderator for today's conversation.
We are at the start, or maybe midway through, a revolution in technology with the rise of artificial intelligence, AI.
AI is everywhere, and it is fundamentally changing our world in ways that are difficult to predict.
AI is already being used across many sectors, including workforce development and job training.
And that's the topic of today's program: the modern job hunt.
Today, we're going to look at how employers understand, evaluate, and match workers with their ideal careers using AI.
The use of personality and culture tests has been around for some time, and now workforce development apps have churned out complicated algorithms to ensure strong matches.
But does this technology account for external and internal bias, risk-taking, or willingness for big change?
Today, we're going to hear from experts in AI on how we can create and use this technology to improve the social determinants of work and help eliminate barriers to success.
So joining me on stage is Bethany Friedlander, president and CEO at NewBridge.
Bethany, welcome.
Also Neal Bruce, who is chief product officer at Arena Analytics, and Anne Conn, president and CEO at the McGregor Foundation.
For those of you streaming online, you can text questions.
That number is 330-541-5794.
So if you're streaming and you want to text a question, or even if you're sitting here, it's 330-541-5794, and the City Club staff will try to get to your questions in the second half of the program.
So members and friends of the City Club of Cleveland, please join me in welcoming our guests today.
So to start: this was a new topic to me.
I had really not thought about this.
You know, I've been involved with job searches like everyone.
And you just do the old route.
You kind of send in a resume, you hope for the best, but our world has changed dramatically in terms of the job search.
This has been going on for a while.
In some of my research, I came across a quote from someone in this industry who said companies' hiring pipelines are broken.
I want to ask all of you, maybe starting with Bethany: is this something you agree with, and can you explain why they're broken?
So I'm going to get this mike wrong.
Is this loud enough?
You're okay.
So I think it's broken partly because our hospital partners are facing a firehose of applications.
This is the largest employing sector in the city of Cleveland.
People want to be working there.
However, it's incredibly soul-sucking to put your resume out into the ecosphere, never get any feedback back, and not know.
And so we definitely know from our students that they're only going to take those chances so many times.
Yeah, I agree.
I would say that I've been in the recruiting space, in the hiring space, for about three decades; I started with ten years as a recruiter.
And then I've spent the last 20 years on the other side, including a stint at Monster.
And what I found is that people looking for work have a misunderstanding that if they apply to one job in a company, there's this magical person who runs around the entire organization as an internal advocate trying to find them the right job.
There is no magical sorting hat within the company to do that.
And I also find that applicants don't really know what's the art of the possible.
They often are fairly myopic in the way they look at jobs and technology hasn't helped them with that problem.
It hasn't historically helped them find more opportunities and even to know when companies are willing to invest in them, what is possible.
And so I feel like there's a real lack of communication for the applicant, there's a real lack of technological support for the organizations, and that results in kind of a really broken outcome.
And I would just add that most people don't think of senior living as an option.
We are health care, and health care is one of the largest employers in Cleveland.
But when you think about going into health care, you think of one of the big three. We have a really exciting organization that could be a good fit, and all of the partners in the Senior Living Collaborative are looking for talented individuals who want to be part of our mission, but people don't think of us because they're waiting for one of the big three to get back to them.
Mm hmm.
And I want to follow up with you a little bit, because one of the sectors we're looking at today specifically is senior care and the people working in that industry.
Tell us a little bit about that.
If we're going to define the problem, how big is it?
What are the challenges that you're facing in finding qualified workers?
Sure.
So if you think back to 2020, at that time there were eight caregivers available for every senior over age 65.
By the year 2050, there will be two for every person over age 65.
So just from a demographic shift, there aren't enough individuals to help provide care for the people who will need them.
So we're really trying to utilize this tool, and we'll get into the details, to find the right fit for individuals who want to be part of a larger mission and who want to care for other individuals.
We can teach and train the skill set, but we're really trying to find the right fit within our organization and industry.
You know, Bethany sent me some statistics on what's going on.
A lot of industries have high turnover.
In fact, more than half of people who start working at a place are going to either quit or be fired within a certain period of time.
And that's a problem for employers, because hiring a person costs over $4,000 on average, just for the process of doing that.
I guess the analysis is that we can find solutions, because employee turnover can be prevented.
And Arena is providing some of those solutions.
But there are problems with AI too.
So this is a process everyone's working through and there's plenty of concern about AI in other sectors.
But I want to just read a quote here from Arena, which, if people don't know, is the company Neal works for, providing solutions for employers seeking to fill positions: talent and opportunity are organized in different ways.
Talent is randomly distributed; opportunity clusters around privilege.
So basically there's a lot of people living in neighborhoods who don't have jobs.
The jobs are clustered around areas that have seen investment over the years.
Right.
And this is sort of where your interest in Arena is.
Bethany, so describe how you feel we're going to attack that problem, that gap between need and privilege.
So in our health care sector partnership work, our hospitals have clearly said that they have retention problems, particularly with entry level workers.
Additionally, there's a cost, but there's also a cost to patient care.
And we need to really think about this as a much more dire circumstance.
And so when I met with Arena, I thought, okay, well, this is great.
We'll plug it into the front of their application system and they're going to get this information, except that presumes that people know to find their way there.
And so if you don't know anybody who's ever worked in health care, if you personally have never even been to the hospitals, you may never find your way to a Cleveland Clinic or a McGregor or a University Hospitals, and therefore the AI can't help you.
So I asked Arena a philosophical question, which is: what if you pulled it off of the ATS and you put it in the middle of the community?
That's the applicant tracking system.
So as opposed to having it working exclusively for that company, what if we went cross-sector?
What if we went across the city?
And so my idea was to address, yes, people being terminated, yes people leaving their jobs voluntarily, but also people who wanted to cross sectors but didn't know how to do that.
So I came across this statistic that 60% of entry level workers want to change sectors, but only 42% ever do.
So that's a huge lost opportunity, I think.
I believe that there are people who are today in manufacturing who really belong in health care.
And I believe that there are people in health care who really belong in manufacturing.
And I believe both of those things to be true.
Neal, I want to ask you: there's another part of this quote on the Arena website I found, that the mismatch we're talking about is responsible for hardening class lines and slowing social mobility.
How do you think Arena is structured to help solve some of those problems?
Yeah, I will say that technology has focused, for say the last 20 years, on matching jobs around skills and academic credentials, which are easy to match.
But frankly, that produces incredibly disproportionate, disadvantaging outcomes for people, because most skills can be learned by a motivated person.
And lots of organizations are willing to train.
And so I think that, you know, we've been going down a path around looking for people who've already done what we want them to do, which frankly isn't very motivating.
Like you might want to hire people who haven't done everything you want them to do.
And I think slowly people are realizing that academic credentials, while useful, aren't always the best indicator.
So here's a classic example of how you think about a person fitting a job.
There's motivation, which is: I want to do it.
There's: can I do it? That's the skills and such.
And there's: will I fit?
And what we've really focused on is fit because when you can identify somebody, regardless of their race or gender or other things like that, if you can identify somebody who can fit and have that be the first cut, you're much less likely to cut out the people who are disadvantaged.
If you start with cuts around academics or skills, you're cutting out a whole bunch of people.
And when you're a company and you've got 100 applicants and you've got hundreds of jobs to fill, you don't have the time to go through it all.
So you want some technology to help you.
And so the question is, is how can we use technology that's going to not cut out some of the best people in the first round?
So what we're trying to do is say, you know, there are ways to identify short lists of people where you're not taking out people who maybe are the most motivated and maybe will stay the longest.
Because you've developed them into a new career and that makes them motivated to stay.
It's just a different way of thinking.
One of the things we're also doing that was mentioned is we're helping companies think of themselves as collaborators and not competitors.
When you buy a book on Amazon, it'll tell you five other books you might want to buy, but they're all from Amazon.
What we're starting to do in Cleveland is: you can find other jobs within McGregor, but you can also find jobs in other similar organizations, which gives you more bites at the apple, more chances to find the job that's right for you.
And so if organizations can see themselves more as collaborators, maybe all boats rise.
Companies do better.
People have better job opportunities.
You know, we're trying to figure out win-wins.
So can I just interrupt for a second?
So I think we're asking employers to take a leap, which is that if Anne's staffing is solid and retained and happy, that's actually better for Cleveland, and it's actually better for me as another employer, because I also have to believe that there are people who are finding their way to McGregor who actually don't fit there either.
So there is there is a leap that we're asking people to take, which is I will give up my applicant to get your applicant who's a better fit.
That's a little mind bending in a very tight labor market.
I'd.
Oh, go ahead.
Well, now, I was just going to add, I will give a shout out to one of our funding partners and we'll get into the details on this.
But it was described to me and I thought this was a great way to kind of wrap your mind around what we're trying to do here in Cleveland.
So McGregor and Judson and Jennings Center for Older Adults and Eliza Jennings are working together to create the senior living hallway with the Arena product.
The hospital partners, hopefully, are working to create a hospital hallway, and ultimately, in partnership with the manufacturing sector, a manufacturing hallway.
But we're going to meet in a central lobby.
And I thought that was a great way to picture it: if somebody is applying in manufacturing but they're actually a better fit for something in health care or senior living, they find their way.
Arena helps them find their way to us, and I thought that was a great analogy or picture to help us really understand what we're trying to create here in Cleveland.
It's like a virtual job fair in some ways with all these different employers.
I want to give you a little bit more detail of what we're talking about.
You know, we've been throwing these terms around, like the Arena software.
Yesterday, I filled out the questionnaire that they're using, that McGregor and some of these other companies use.
It's about 5 minutes, several pages.
And it might surprise you the kind of questions that Arena is asking applicants.
So it starts off with some normal questions, like: outside of your job, how many phone calls do you make per week?
I don't know.
I don't really think about that.
I get bored quickly, and I'm like, check.
I'm not sure if that's going to land me a job or not.
I don't know.
I'm a morning person? Not.
I am comfortable dealing with sick people and handling blood or human waste.
Yeah, I don't know.
I mean, I would think that would be a criterion, actually, for those specific job skills you're looking for.
I am good at keeping secrets, you know, am I going to be honest about that?
Because I'm probably not.
You know, if you want to keep something secret, you don't tell me about it.
And it's funny, when I was going over this list with some of my colleagues, they're like, you're going to lie about that?
No, I'm just going to honestly say I'm not good at keeping secrets.
But, Neal, this is how this software works.
There are plenty of other questions on here that really dig into fit, or what would you call it, aptitude, right?
Yeah.
I would say it's: are you going to fit into that?
And the way we give predictions, the way the software works, is you take the assessment and then we judge a person associated with a particular job in a particular location.
So somebody who might be likely to stay longer in a rural setting may not stay as long in the urban setting.
And so you might wonder, like, how do we know?
So what we do is we look at the "who" for a company.
We get three years of past historical information about all of the people they've hired and how long they have stayed.
And then we get ongoing information.
So what happens is the machine learning understands for people in this location, in this job, if they answer this way, they stick or they don't.
And we have models.
We have a series of machine learning models: one for 30 days, 60 days, 90 days, 180 days, and 360 days.
I think after 360, things about your boss matter way more than your hiring experience.
So we really are focused on the first year, but it is really interesting to help not only organizations but also applicants know who's got a decent shot of sticking with a job, because what we hear is that companies are willing to invest.
But if these people just don't want to do the work, no matter how much we invest, then we're both wasting time.
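To make the mechanics Neal describes a little more concrete for readers: the following is a minimal sketch, not Arena's actual code or data, of training one retention model per time horizon from historical hire records. The column names, the synthetic data, and the choice of logistic regression are all assumptions made purely for illustration.
```python
# Illustrative sketch only -- not Arena's system. Synthetic data stands in
# for "three years of past hires" (assessment answers, job, location, tenure).
import numpy as np
import pandas as pd
from sklearn.compose import ColumnTransformer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import Pipeline
from sklearn.preprocessing import OneHotEncoder

rng = np.random.default_rng(0)
n = 2000
hires = pd.DataFrame({
    "q_morning_person": rng.integers(1, 6, n),   # hypothetical 1-5 assessment answers
    "q_tells_stories": rng.integers(1, 6, n),
    "job": rng.choice(["caregiver", "sterile_processing"], n),
    "location": rng.choice(["urban", "rural"], n),
    "tenure_days": rng.integers(1, 720, n),      # how long each past hire stayed
})
features = ["q_morning_person", "q_tells_stories", "job", "location"]

# One model per horizon: "did this hire stay at least H days in this job/location?"
models = {}
for horizon in [30, 60, 90, 180, 360]:
    stayed = (hires["tenure_days"] >= horizon).astype(int)
    model = Pipeline([
        ("encode", ColumnTransformer(
            [("cat", OneHotEncoder(handle_unknown="ignore"), ["job", "location"])],
            remainder="passthrough")),
        ("clf", LogisticRegression(max_iter=1000)),
    ])
    model.fit(hires[features], stayed)
    models[horizon] = model

# Score one applicant for one job/location combination.
applicant = pd.DataFrame([{"q_morning_person": 4, "q_tells_stories": 5,
                           "job": "caregiver", "location": "urban"}])
for horizon, model in models.items():
    p = model.predict_proba(applicant)[0, 1]
    print(f"P(stays >= {horizon} days): {p:.2f}")
```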
So what we can do is we can give someone an assessment.
And then, having already scanned... you know, we have been around for ten years, roughly.
We've got 3 million people who've taken the assessment across 20,000 or 32,000 locations.
So more than just Cleveland.
And we can then scan, in the local geography, for other jobs that they're likely to fit, as we're talking about.
And that's really promising: if I know that I'm not just sending a cross-my-fingers resume for a job, but that I've got a shot at really fitting in this organization.
And the company will know that I've got a shot, then I'm at least past the first cut.
Then I got a shot at really having a conversation.
And the key, I mean, when I was a recruiter, the key was just getting the right people to that first interview.
Like once that happens, then all kinds of good things can happen.
But there are so many barriers to just getting that first interview.
We're trying to change the way that people look at applicants and have a broader view of what's possible.
I just want to list a couple more of these.
I am embarrassed by any mistakes that I make.
Yes, I guess I'll be honest about that.
I like to tell stories.
I thought that was an interesting question for a job application.
Is that something that... you know, how will this work for you?
How will this help you?
I can use an example with "I like to tell stories."
Most of our work is relational, right?
You're living or you're working in people's homes.
So you are coming in and working with individuals every day.
So if they like to tell you stories, or you like to share about your day, that's probably going to be a little easier in our setting than maybe in manufacturing.
As an example.
We're going to go to questions in probably about 10 minutes or so, but I wanted to address some of the concerns that many people have about AI taking over the world.
An interesting thing: on Arena's website, there is a large section devoted to the ethics of AI, and an acknowledgment of the possible damage a particular system might cause.
AI functions more effectively in environments with finite boundaries and fixed rules.
So you have to define, like, what ethical framework are you using in your company?
There are plenty of examples of AI really screwing up in this phase that we're in, and it's a guarantee that all of us are going to be interacting with AI in different ways.
And we're going to have to trust that the system is going to be self-policing in some way.
Arena is getting out in front of that in some ways by putting together an ethical framework. Harm, it says, can range from inconvenience to substantial economic and emotional damage for people using the system.
We are just beginning to understand both the potential strengths and weaknesses of our tools.
Can we talk a little bit about that, how we're going to mitigate that?
Yeah.
So I think there's a lot of risk in in turning over too much decision making to computers.
I will also say there's a lot of risk of having decisions made by humans.
You know, as much risk as there is of bias in computers, there's a ton of risk of bias in humans, too.
And so I think that we have to look at both things.
But from a computer perspective, one of the things that we've built is this whole bias mitigation engine.
It's a separate set of machine learning algorithms that is asking the question: can I determine whether this is a man or a woman that we're evaluating?
Can I determine age, or veteran status?
So the key thing our adversarial bias mitigation engines are looking for is: is there any way to determine information that might lead to bias before the engine makes a prediction?
There are not that many AI companies who have figured out how to do that in an effective way.
And we can actually measure it; we've got reporting that shows it.
If you think of 50 out of 100 as being zero bias, without our engine maybe we're like 54.
With the bias mitigation engine turned on, we're like 52.
The noise factor is kind of like 53, so that's below the level where you would even know.
But humans are like 80, you know.
So I think that we absolutely need to figure out how to help humans be better about bias, and make sure that we're not training on the wrong data.
There have been tons of mistakes made around AI training where, if you just blindly put in a bunch of resumes, which again are going to be academic, skill, and address-location oriented, you're going to end up, frankly, in lots of jobs in the U.S., with a lot of white males that the system is going to try to hire, because they have historically done great. You have to train your models with the right data.
If you train models to reinforce past problems, you're going to get outcomes that are going to continue with those past problems.
And so I think looking at it differently is going to... it'll be interesting to see how this works.
Yeah.
Bethany, I want to ask you, like, what made you decide to work with Arena and how does it help you?
So at NewBridge, we actually took a baby step into AI when we started our sterile processing program.
Sterile processing was our first non-patient-facing role, and we didn't know how to pick applicants.
So describe that job.
That is the job that sterilizes all of the tools that are going to be used in surgery and actually inspects them all for flaws so that they're going to work exactly as they should in the operating room.
Very important.
It is also a job that you mostly do solo, and the machine that actually cleans the tools is usually in the basement, and it's hot.
And so we didn't know how...
You're really selling it.
I know.
But so we took a baby step, and what we did was we built a profile based on our own sterile processor who's doing the training.
And our classes have gotten incrementally better each time.
But the problem is our model doesn't learn; it stays static unless I change the profile, and it's always going against that same profile.
What I'm really intrigued by is the fact that Arena learns; every single time, it gets smarter and smarter and smarter, because it's getting that hire data and that tenure data.
And so I just think that's a fascinating approach, because, again, this is how I know that people don't necessarily know that much about themselves.
So we have three training programs.
The applicants that you're dealing with don't have that same sort of, I don't know, life history that has led them to be really good at getting jobs.
Or even to know what they should be doing.
So about 80% of my students have either cared for a family member with an illness or have been cared for.
So they know that they have the empathy and the compassion, but they don't know where they fit into the organizations.
And I know this because we have three very different roles that we train for, and most of my students apply to all three.
So there's no discernment.
And so I found Arena through a footnote of a 36-page paper, and I said, they're never going to call me back.
I mean, I'm going to send an email.
But, you know, I'm some little tiny place and it's been the most amazing conversation.
So I think what's interesting, looking back at some of the missteps that AI has seen in this industry: Amazon, for example, used resume-screening software, and they found that it was filled with bias, that it was much worse, actually, than having humans do that screening.
They couldn't fix the AI; they had to scrap the whole thing.
But you're saying Arena has the ability to continually improve?
Yeah.
So there are two ways we're continuously improving.
One is we are tracking, over months and years, how long people are staying based on their answers.
And if the people who stay longer start answering differently, then those new answers become the ones that are marked more likely, so it adapts as things change.
But the point is, it's all based on actual outcomes.
It's not guesswork; a lot of behavioral assessment work is based on "we hope so."
But what we're doing is we're actually looking at who is staying in jobs and how did they answer those questions for that location and for that job.
And then we're matching that to the current people who are applying.
And so the AI learns; the AI is also learning the other way.
It's learning.
It's continuously looking at: can I tell who this is or not?
And that's where the bias mitigation goes in.
So there are two competing engines working against each other. Basically, one says: we think this person is likely to stay.
And then the other engine is asking, can I tell who they are?
And if we can get a good prediction and we can't tell a lot of their personal information, that's a good outcome.
That's a go.
And so I think what we need to think about is how AI can be double-checking itself and not just running amok, which historically, frankly, like I said, has been garbage in, garbage out.
If you put in a bunch of resumes, you're going to get people who've historically done well in jobs.
And we know that that's a subset of who should be getting jobs.
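To illustrate the "two competing engines" idea in the simplest possible terms, here is a sketch under assumptions: synthetic data, a logistic regression for each engine, and AUC standing in for the 0-to-100 scale Neal describes. It is not Arena's bias mitigation engine, just the general adversarial check: if a second model cannot recover a protected attribute from the first model's retention scores, the scores carry little information about who the person is.
```python
# Illustrative sketch only -- not Arena's bias mitigation engine.
# Engine 1 predicts retention; Engine 2 (the adversary) tries to recover a
# protected attribute from Engine 1's scores. An adversary AUC near 0.5 means
# "we can't tell who this is"; well above 0.5 flags possible leakage.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(1)
n = 5000
X = rng.normal(size=(n, 6))                       # hypothetical assessment features
protected = rng.integers(0, 2, n)                 # e.g. a binary protected attribute
stayed = (X[:, 0] + 0.5 * X[:, 1] + rng.normal(size=n) > 0).astype(int)

X_tr, X_te, y_tr, y_te, p_tr, p_te = train_test_split(
    X, stayed, protected, test_size=0.3, random_state=0)

# Engine 1: the retention predictor.
retention = LogisticRegression(max_iter=1000).fit(X_tr, y_tr)
scores_tr = retention.predict_proba(X_tr)[:, [1]]
scores_te = retention.predict_proba(X_te)[:, [1]]

# Engine 2: the adversary, fed only Engine 1's scores.
adversary = LogisticRegression(max_iter=1000).fit(scores_tr, p_tr)
leak_auc = roc_auc_score(p_te, adversary.predict_proba(scores_te)[:, 1])

print(f"Retention AUC:                    {roc_auc_score(y_te, scores_te[:, 0]):.2f}")
print(f"Adversary AUC (0.5 = no leakage): {leak_auc:.2f}")
# A production system would compare the leakage against a noise threshold and
# retrain or drop offending signals before using the predictions.
```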
Yeah, in that part of the research, too, I was finding that some of these AI systems really like applicants whose name is Jared and who played lacrosse.
Well, I mean, there's a bias built into it.
Well, I'll just say real quick, I live in Boston.
I was a recruiter in Boston for many years.
And there was a study done in Boston.
And it really kind of broke my heart.
And I showed it to the recruiters. They took resumes and randomly changed names and town locations, and people who had white-sounding names in white-sounding towns...
It didn't matter what their skills were or what their education was.
They got interviewed more.
That was... I mean, you could look it up.
There was a study done probably 20 years ago.
I doubt things have changed much in the last 20 years.
We have a real problem with being able to look at people with fair eyes and we've got to get machines that help us do that.
One more question before we turn it over to the audience.
And I want to just wrap up with you: there's been all this discussion about bias, but how will Arena improve your hiring?
Well, I think you said it best when we were doing some of our planning around this.
We want to utilize AI where it's strong; let's say computers do math fairly well.
Humans, not so much.
So we want to be able to utilize it as a tool, not as the final decision-making tool.
So for us, and speaking to the bias side: when a hiring manager is looking at the applications, they're going to evaluate how long applicants have stayed in positions.
That's probably one of the first things they look at before they actually decide who they're going to bring in for the interview, because they don't want to go through this again in another three months.
So with this tool, the way that we're hoping to implement it is that we start the interviews with the likely matches, regardless of the historical performance from the standpoint of what's on the resume, because that will expand who we're interviewing.
Right now we know that the hiring managers are only looking at that history; that's one of the first things they look to, no matter how much you train them, and that's one of our challenges.
So I think that's an opportunity for us.
We've been doing an earn-and-learn model for training staff, state-tested nursing assistants, in our sector, and we've trained over 85 individuals.
But we want to make sure that if folks are being trained, they're staying in those roles and want to continue.
If we're going to invest in individuals and help them continue along that career path, it would be great to find folks who are a match in that way.
All right.
Let's begin the audience Q&A. Once again, just to reset for our streamers, I'm Jeff St. Clair from Ideastream Public Media.
And today we are discussing how we can create and use artificial intelligence, AI technology, to improve the social determinants of work and help eliminate barriers to success.
We're talking with Bethany Friedlander, president and CEO of NewBridge; Neal Bruce, chief product officer at Arena Analytics; and Anne Conn, president and CEO of the McGregor Foundation.
So we welcome questions from everyone here: City Club members, guests, students, and those joining via our livestream at cityclub.org.
Again, you can text us at 330-541-5794.
Okay, we have our first question.
Both professional and legal standards require that people making business decisions and other decisions are supposed to know what their resources are basing those decisions on.
Traditionally, it's been: if you hire a lawyer or an accountant, you'd better know you've hired somebody who's competent.
With AI, the better the program you hire, the less you're going to know about how it works and the data it uses.
So the less you're actually going to know about how it came up with its conclusions.
You're saying, well, we're going to have all this after the fact testing, which could be years down the road, in which case you've had years of discrimination or just bad judgments.
How do you balance using AI, but at the same time fulfilling your obligations as professionals about knowing what you're recommending before you actually recommend it?
We're looking at you, Neal.
I get this one.
Great.
So yeah, you bring up a great question.
And you're right: the more you go towards the deep learning aspects of machine learning, the less clear the reason is why the machines pick the way they do.
That's just true.
What we do with our customers is we look at, you know, the pools of people that are being selected to be more likely to stay.
We also, actually, every time I've seen us do a contract, we include a guarantee that it works, so that if you hire the people we say are likely, you will see lower turnover.
And we put our money where our mouth is, where if that doesn't happen, then you will get your money back.
So I think the question wasn't, does it work?
I think the question was, is it fair?
And I think we're constantly trying to understand how we can deliver.
My expectation is that we're balancing out fairness, looking at populations, and seeing whether we're creating a more robust applicant pool.
But I think this is also something that is going to continue to evolve.
So I don't have a definitive answer, like, yes, in all cases.
I think this is something that's going to evolve.
I don't know if that answers your question. Kind of? Yeah, that's probably the best I can do.
I think the other piece is, this isn't selecting the candidate.
It's really just expanding the pool.
So it's still up to us as employers and clinical leaders to make sure that the people we're hiring have the skill set that they need, especially when you are licensed, like an accountant.
Being a former accountant myself, I would say the same, right?
You want to make sure you're hiring somebody that's technically competent, but if you never get to the interview, then you miss out on a whole number of people who could have been technically competent but never made it to an interview.
So I think that's where we're trying to solve maybe a little different challenge.
It's a black box, though.
I mean.
There is definitely a black box element to it.
I mean, one thing that I think would be interesting, and I don't know that this has been done yet, but I would love to see a computer-selected short list of applicants with a real bias mitigation approach go head to head against a bunch of humans, and see who's going to be more biased.
I think that's a really easy answer.
Who's going to win that?
You know?
And so I think that what we're doing now is probably in a lot of ways worse than embracing technology that doesn't really care if you're white or black or a man or a woman.
It just cares about, like, are you likely to perform, you know?
And so maybe that's a better approach, but yeah, to be determined.
Next question.
Okay.
Our next question is a text question.
It says: how can this type of AI help high school students determine academic tracks in high school, in STEM-focused schools and our schools, or find a college or a major?
Bethany, are you you think I'm throwing it to you?
So I'm going to go back to Kelly.
So Kelly has told me that this is much better coming from Arena.
Yes, and it's quite funny that I'm answering this question, because I'm not the right person, but it's not about helping you with an educational pathway.
It's about helping you find a fit for a particular job.
And those are really two different things.
And so again, what we're talking about here are mostly non-credentialed positions.
So the fit is really the most important piece.
Once you get to the credentials, I'm out.
I know.
Right?
That's a decision that's already been made.
I think there are two things that could help.
I mean, right now we're focusing more on people who are not in high school.
But I think similar models apply, which is: do I even know what's possible?
That is the first question, and usually the answer is no.
And there's a lot more skill transferability than people imagine, to be honest.
So there's an art-of-the-possible problem, and then there's what I'll call a helping-hand problem: companies are willing to invest if they think these people will stay, because they've got too many jobs open.
And so if you can combine those two, the art of the possible and understanding where you're going to be able to get help, it exponentially opens more doors than what people perceive today.
And that's that's what we're trying to help with.
I just want to take a look at the challenges that we are facing.
This is something that Bethany sent me: there are 53 million Americans who make under $18,000 a year.
There's a generation of people, most young workers, who are stuck or spinning their wheels.
And they find that these low-wage earners have no clear path to higher wages.
So hopefully AI might help people find that path.
Right.
So we didn't talk about this, but part of my whole hypothesis is that people will persist more in places where they feel seen and heard, a sense of belonging, and a meaningfulness in the work.
And we want people to persist because the only way we're going to move people into family sustaining wages is if they stay long enough to take advantage of the training that's offered to them or the apprenticeship or the tuition reimbursement.
And so I'm very focused on resiliency and persistence, and I think fit is deeply connected to that.
Great.
Next question.
Now some of life's little scattering.
Good afternoon.
My question is directed to Neal.
Obviously, the conversation is still centered around talent.
Right.
But is there any mechanism that's put in place to focus on the training aspect?
Right.
You talk about retention, right?
Why are people walking away from these jobs?
Not just the people that are staying. How are you incorporating or utilizing AI to analyze why people are leaving?
Right.
And I think Anne has kind of spoken about the training piece and staying, and everything. How can you maximize it to make it more effective for individuals to want to stay?
So how are you rolling that in or implementing it from that perspective?
You know, one of the things that we're actively working on is not just helping with people initially joining a company, but also if you think about once you're in a company for a few years, you might want to look for another job within the same company.
And so that's classically known as internal mobility, and it's got a lot of the same problems.
In some ways it's fraught with more issues because like if you don't get that job, does everyone look down on you?
Or, there's a lot of emotional, political stuff, and so a lot of people just don't want to be bothered, or they just go somewhere else because it's easier to get a job.
I mean, I'm sure a lot of people here would agree it's easier to get a job at another company than to move within your own company for lots of silly reasons.
And so I think that, you know, I mentioned there's two issues.
There's the art-of-the-possible problem: what could happen.
But just as important is: am I just one skill away from a new kind of career, and would I get help to build that skill?
That's an incredible leverage opportunity that individuals don't understand.
And companies are terrible at explaining.
Most companies, they give you a list of internal jobs and say, good luck.
That's not helpful.
We could do better.
And I just want to be clear, Arena is this tiny little startup.
We're excited.
We're mighty.
But like, we're not going to solve all these problems by ourselves.
So I don't want to try to represent that we're going to solve it all, but I think we can be part of the solution.
Next question.
Hi, I'm Patrice Blakemore.
I'm the Senior Vice President of Equity and Inclusion at the Greater Cleveland Partnership.
One of our strategies is around inclusive opportunity and we focus on increasing diversity in leadership positions.
So for the middle-level and senior-level positions: when you look at the outcomes and the analytics of the AI that you're using, have you seen a difference in terms of the racial makeup of the employees who are in those senior and middle-level positions?
Yeah.
So two thoughts.
One, we aren't giving the assessment for jobs at the senior level because often those are contractual deals.
It's like a whole different game.
I think that there's a long play, and we haven't figured out what the outcome is yet because there hasn't been enough time.
But there's two ways people get into senior roles.
Often they're pulled in at that level, and that's a different kind of problem to solve.
What we're trying to do, to a small extent, is for the people who do come up from within the organization, the bottoms-up approach: making that first cut more fair, so that we're broader in the way we think about who could be possible for that job.
And getting away from... I mean, it's a little crazy that a lot of jobs are like, well, you've done exactly everything we wanted you to do for ten years, so of course this is what you want to do for your next ten years.
You know, that's what the skill matching game is about, and that's maybe the wrong way to think.
So I think that if we can...
I mean, people need help shortlisting applications.
The question is, how do we do it in the fairest way possible?
So this is not a great answer, but I'm hopeful that as people with more opportunity and more diverse populations get considered, they will move up in the ranks over time.
We're probably not going to be the fast track to solve for that.
I think there are different kinds of technologies and tools for executives that could make a better, faster path to that.
But part of that answer is about persistence.
I mean, I'm thinking about a board member of mine who stayed with the same company for 20-some years and has a master's degree that was all done through tuition reimbursement.
And again, that persistence is absolutely tied to fit.
Yeah, Bethany has some words for that.
Meraki, a Greek word for approaching things with passion, with your whole heart.
And then sisu. You finish it, okay?
And somehow she knows Finnish: the word sisu, meaning grit, bravery, and strength, making an extraordinary effort.
And I've always felt that too, is like, if you keep trying, you'll get where you need to be in some ways.
But still, there is a huge concern about a generation of young people.
The question earlier about the high schoolers trying to break into this: they find it very daunting entering the workforce.
But can I just say that part of resiliency and persistence is we have to believe that people are entitled to feel great about the work they're doing regardless of the level that they're currently at.
And I think we in workforce development sometimes have a little bit of a widget mentality, and we want to put people in where we think they need to be, or where we can get them to work quickly.
But that's not necessarily the right thing to do.
And I think sometimes there's an elitist view that that kind of passion or that kind of work is not for everybody.
And I really think that it is.
I think everybody wants to find fulfilling work, though.
I think that's, you know, a driving motivation for everyone.
Next question.
Good afternoon.
My question is for Neil.
Are you using AI experts to work hand in hand with the developers on the algorithm, to ensure that the types of questions that are being asked are evolving, even for those who have secured a job and are moving on to advancement?
How are you ensuring that the types of questions being asked, and the algorithm that goes along with them, are in fact nondiscriminatory and staying that way as the system learns?
Yeah, it's a great question.
So we've used an outside ethics board to help us wrestle with these kinds of questions, so we don't do it all in-house.
We're trying to use outsiders to help make those decisions, help us figure that out.
I will say that our bias mitigation techniques are helping us understand: are any of those questions also going to lead to bias?
So I think it's a combination of human intervention and what the outcome data shows.
I mean, a key question when the system is making a prediction is: can we tell much about who they are?
And the less we can determine about these areas of possible discrimination, the better.
Because then we're really judging the person and we're not judging these attributes.
But yeah, we've used outside people to help us try to figure out how to write the questions.
And frankly, it's best to not move the questions all the time because by having stable questions, you get better long term data on the efficacy of those questions.
Good afternoon, Rebecca Kushner with our Fair Workforce in the Ohio Workforce Coalition.
I first want to say we ought to be careful and not assume that everyone making less than $18,000 a year is making that because of a lack of access to information or training.
But there's a whole bunch of other reasons, so I want that on the record.
But my question is: how do you account for the fact that we know there are employer practices that reinforce bias, that reinforce poor working conditions, that there are a number of job quality issues?
How do you, I guess, use the information you're gathering either to coach the employers, or at least be aware that you're not reinforcing poor practices by matching people to jobs that are bad for them?
So yeah, humans have big problems with judging people fairly.
That's just a truism.
And this is a big thing.
Way bigger than arena, way bigger than anyone in this room.
I do think, and you touched on it, that it's eye opening for a management team to look at the pool that we show versus the pool they may have picked on their own.
I mean, that is eye-opening: to think differently about how you would make your first cut of applicants.
I think that there's tons of things in the way we work that exclude.
And, you know, I think that, you know, we're going to have to figure this out.
Computers are not going away.
Humans hopefully are not going away.
We've got to figure out how to work so that we can let the humans do what humans do best, which is, frankly, things like empathy and vision, and let the computers do the stuff that they're better at, which is typically things like math and work that gets done over and over again.
And so I think that's going to become the art of the future of work: let the humans do the things they're best at, for as long as they're still best at them, and let the computers do the stuff that we're not as good at. And we're not always as good at making a cut based on the art of the possible.
We're just not.
I mean, historically we haven't been.
So like, I think it'll be interesting to see how we, we evolve.
But Neal, isn't it also an invitation to an employer to look at those practices if they're not getting as many high-likelies?
And one of the beautiful things about the community-based model is that you can compare numbers, right?
And so I would see this as an invitation for them to be reflective on their own practices, and then think about the number of high-likelihood matches as an indicator that you're moving in the right direction.
I would hope.
I mean, I think that companies are going to have to have a lot bigger conversation than we're going to be able to have with them to change bad behavior.
You know, like that's a much bigger problem than I think we can solve for them.
So I think if people are serious about this, then there's a lot of things you can do to get better at hiring.
Hello again.
Our next question is a text question.
If AI is going to help companies hire based on fit and potential, how would that change companies' approach to skill development once workers are on the job?
That's a great question.
Yeah.
So I think that people are much more malleable than we often give them credit for, and they're much more able to learn new skills than we give them credit for.
And transferable skills are much more interesting than we give people the opportunity for.
And so I think about just-in-time training, versus kind of everyone in a room, assuming everyone needs it at the same time.
I think there's a lot of technologies that are going to help us actually learn as we need.
I mean, we're not quite at The Matrix, where I could put on a hat and learn to fly a helicopter, but we are going in that direction of just-in-time training.
And I think this idea of using tools to help us train, and of looking at humans as entities of potential instead of as fixed, you know, fixed in a box for the rest of their life...
I think this is really going to change the way we think about humans working.
So I have a very selfish reason to want to be up here, right?
So I want there to be retention in the workplace because it saves employers thousands, perhaps millions of dollars.
And then that is an enormous opportunity for them to invest in their workforce.
And so that's why I would do this. To me, that's the answer, right?
You pour those savings back into the person.
Well, and I could just add on that from the employer side.
I mean, one of the great parts about this conversation is that many of you don't know that Bethany also leads the Central School of Nursing.
And because of the work we're doing, in the earn-and-learn model we've already established, we can now provide one for an LPN, so we can pay them part of the time while they're going to school.
And in addition, for working as an STNA, it gives them a little bit more breadth, because it is hard to work full time and go to school and take care of a family.
That's all challenging.
Sometimes those hurdles are things that people never really get over.
And so this is a way for us to reinvest those dollars that we're not having to spend in training, you know, 100 new people because we're hiring a better fit.
And it's not that that person is not the right fit in general.
It's just about that job and that person.
So I think that's the other piece that we're trying to adapt to.
Yeah, I want to emphasize that we're not trying to say a person is likely.
We're saying a person is likely with this combination of job and location; they might be unlikely for one and very likely for another.
They might be unlikely for a job at your organization and very likely for an adjacent job at another organization.
And that's where the magic really happens, where we're expanding opportunity.
Hey, I want a job.
I'm going to put all the right keywords on my resume.
I'm going to Google the right answers to the personality test.
How do we stop people from gaming the system and trying to answer the way I think you want me to answer, the way the computer wants me to answer?
Because computers can more likely be tricked, perhaps, than humans. Humans have the ability to have intuitions and things like that.
And computers are just going off the data, and I'm afraid they can be tricked.
Yeah, there was an example where people were having a chatbot write their resume, which then went to an AI sorter.
So it was two computers talking to each other to get the job.
Yeah, I'll say humans can be tricked, not just computers.
I do really think this game that we're going to be playing in the workforce in the next 20 years is like figuring out what it is to be human and leaning in on that and figuring out what we should let go of and letting computers do the rest.
I think that's a substantial question.
I mean, I just think that we're not going to stop the leverage that technology gives us.
And so the question is, how can we use it in the best way possible?
I do think that people try to game everything.
And, you know, sometimes that works, I guess is the short answer.
And also, I would like to add: one of these questions is, any job is better than no job at all.
I don't know how to respond to that.
I guess I would say, it's a job.
I mean, personally, I've had jobs where I was asked to do things that I ethically wasn't willing to do.
And I walked away.
And in that case, no job was better than a job.
So I think there were times, for me personally, when I would say no job is better than a job.
And then I just find a job where I don't get asked to do things I'm ethically opposed to.
And I did want to just mention, too.
Just because a job opportunity comes up as high-likely, does that mean you are required to apply for that job?
It is absolutely your right to continue to have a dream of a particular employer or particular role, even if it didn't come up as highly likely.
And there are people who don't know certain things about themselves, but there's also people who do know certain things about themselves and do have a sense of where they belong.
So last question.
Oh, where to go?
Oh, sorry.
Okay, that's it then.
What a great conversation.
Okay, here we go.
Thank you, Bethany, Neal, and Anne for joining us.
Forums like this are made possible thanks to generous support from individuals.
You can learn more about how to become a guardian of free speech at cityclub.org.
Today's forum is part of the City Club's Workforce Development series in partnership with the Deaconess Foundation.
Thank you to Kathy Belk for her support.
And join me in welcoming our students, who are from Brookside High School
and MC2STEM High School as well.
Thank you.
Our guests are hosted by the Deaconess Foundation, the Greater Cleveland Partnership, the Northeast Ohio Regional Sewer District, Towards Employment, and the Work Room Program Alliance.
Coming up at the City Club on Wednesday, Valentine's Day, February 14th, the City Club welcomes Felton Thomas, CEO of the Cleveland Public Library, to talk about whether or not libraries can or even should be everything to everyone.
You can learn more about this forum and others at cityclub.org.
That brings us to the end of today's forum.
Thank you.
Once again, our guests: Bethany Friedlander, Neal Bruce,
and Anne Conn.
I'm sorry, it's Conn, Anne Conn, right?
I'm sorry.
A little bit confusing.
Thank you guys so much.
I really appreciate everything.
It's a great conversation.
For information on upcoming speakers or for podcasts of the City Club, go to cityclub.org.
Production and distribution of City Club forums and Ideastream Public Media are made possible by PNC and the United Black Fund of Greater Cleveland, Inc.
