At Howard
Sam Altman – OpenAI/ChatGPT
Season 11 Episode 2 | 52m 19s
A conversation on artificial intelligence with OpenAI co-founder and CEO Sam Altman.
Howard University is honored to host an interactive conversation on artificial intelligence moderated by President Ben Vinson III, in discussion with OpenAI co-founder and CEO Sam Altman.
At Howard is a local public television program presented by WHUT.
>>Hello, I'm Dr. Ben Vinson III, the 18th President of Howard University.
And it is my pleasure to welcome you to this program, one of many we plan to bring to you as part of the "@Howard" series.
Howard University has the distinct pleasure of being the only HBCU in the country to hold the license of a public television station.
This special relationship allows WHUT to have unique access to the breadth and depth of academic content that is being produced on our campus.
From stimulating lecture series and panel discussions on a wide range of topics to one-on-one conversations with captains of industry and international leaders of business, politics, and the arts.
From time to time, WHUT will broadcast some of that content, in forms ranging from full programs to short excerpts, that we believe will surely stimulate and engage you.
So, sit back and enjoy.
We're proud to share with you some of what makes Howard University so special.
[ Music ] >>It is certainly a pleasure to see so many of our students, faculty, and staff here for this important conversation.
I am Dr. Anthony K. Wutoh, and I currently serve as the Provost and Chief Academic Officer here at Howard University, and it is certainly an honor to welcome each of you here and for the university to host Mr. Sam Altman, the CEO of OpenAI.
Um, like many of you... Our earliest introductions to artificial intelligence may have been through "The Matrix" or "Terminator" or "Artificial Intelligence," the movie.
But the importance of artificial intelligence really hit the public scene last year with the introduction of ChatGPT.
And this is certainly an important conversation for us at Howard, as we are investing significantly in artificial intelligence and machine learning and the training and education of our students, and also its application for us as an institution.
Howard is committed to leading on AI with the highest ethical standards.
The university recognizes that AI research can have significant impacts on society, and as such, we take seriously our responsibility with a special emphasis on diversity and inclusivity.
We have started a number of initiatives, including our Center for Applied Data Science and Analytics, and recently initiated our master's degree in Applied Data Science and Analytics this past fall as an example.
Um, at Howard, we like to say that social justice is incorporated within the DNA of the institution and is incorporated in each of our academic programs, whether our students are majoring in engineering, computer science, medicine, social work, divinity.
And we understand the importance and the ubiquity that this technology is going to continue to have moving forward.
And so, it certainly is an honor and a pleasure for us to host this very important conversation, and we will proceed with the program.
I would like to next introduce and welcome Dr. Ba-Shen Welch, who is the Senior Advisor for the AI Ethics Council and happens to be a Howard alum.
And so, please join me in welcoming Dr. Welch for her comments.
[ Applause ] >>Thank you, Provost.
Good morning everyone.
Good morning, Howard University.
I am Ba-Shen French, professionally, Ba-Shen Welch, and I'm a Senior Advisor for the AI Ethics Council powered by Operation HOPE.
But importantly, as the provost has mentioned, I'm a double alumna of Howard University.
[ Cheers and applause ] Yes.
Home of the black intelligentsia.
Right?
The standard of excellence, both near and far, both domestically and abroad.
I'm a proud bison.
I love Howard University.
OpenAI was founded under the mantra to ensure artificial general intelligence benefits all humanity.
To this end, Mr. Sam Altman embarked upon an international listening tour of sorts.
In May of 2023, he partnered with global entrepreneur and advocate for marginalized communities, John Bryant of Operation HOPE.
During this tour, the first stop, befittingly, was at an HBCU, Clark Atlanta University and the AU Center.
There was a 2-day session of meetings and listening from members of the international community, leaders, faculty, and students who shared their ideas and their concepts related to ethics in AI.
From there, a council was birthed.
Operation HOPE is partnering with the HBCU community to assemble presidents, civil rights leaders, technology and business industry groundbreakers, clergy, government officials, and ethicists, who have convened as a council to discuss AI and to advise as it relates to AI.
Mr. Sam Altman and Mr. John Bryant are co-chairs of this council.
Historically black colleges and universities are playing an important role, and for that we should applaud.
[ Applause ] Now, on to our panelists for the Fireside Chat.
Our president, President Vinson, Dr. Ben Vinson III is the 18th President of Howard University and a tenured professor of history in the university's College of Arts and Sciences.
As president, he is tasked with inspiring and leading the Howard community made up of undergraduate and graduate students, faculty, and staff.
Dr. Vinson earned his bachelor's degrees in history and classical studies from Dartmouth College, and a doctorate in Latin American history from Columbia University.
He has been awarded fellowships from the Fulbright Commission, the National Humanities Center, Social Science Research Council at University of North Carolina at Chapel Hill, the Ford, Rockefeller, and Mellon Foundations.
Mr. President.
[ Cheers and applause ] Mr. Sam Altman is the co-founder and CEO of OpenAI, the AI research and deployment company behind ChatGPT.
Mr. Altman was president of the early stage startup accelerator Y Combinator from 2014 to 2019.
In 2015, Sam co-founded OpenAI with the mission to build general purpose artificial intelligence that benefits all of humanity.
With his Midwest upbringing, he brings the whole of himself, having been an advocate for marginalized populations since his high school days.
Mr. Sam Altman.
[ Applause ] Dr. William Sutherland.
Dr. William Sutherland is professor of biochemistry at the Howard University College of Medicine.
Additionally, he serves as a Director of Howard University's Center for Computational Biology and Bioinformatics.
He is the Interim Director of the Howard University Center for Applied Data Science and Analytics, and the principal investigator of the Howard University Research Centers and Minority Institutions Program.
His research includes the utilization of data science principles to better understand differing chronic disease burdens as they affect different ethnic groups in the Washington, D.C. area.
Dr. Sutherland.
[ Applause ] >>Great.
So, why don't we go ahead and get started?
Again, it's a delight to have you here on our campus.
Um, there has been so much moving so quickly in AI.
It is absolutely mesmerizing.
And you've been at the, you know, the front seat of a lot of this change.
Um, as an institution that's committed to social justice, to ethics, to responsibility, and an institution that believes we have a strong place in the future of AI, one of the questions we'd like to ask is this:
How can we ensure, with so much happening so quickly, that the ethical considerations are prioritized?
And what role should industry leaders such as yourself and some of your colleagues, what role should leaders play in making sure that we have responsible practices?
>>You know, the way that I like to talk about this is to say that artificial intelligence can be the greatest technological revolution, the greatest new tool, the greatest engine for economic growth the world has had.
But it's a can, not a will.
The technology clearly is amazing.
But to get deployed well, it will take partnership and integration with all of society, making sure that this is done in an equitable way that lifts everybody up. And the technology actually lends itself, I think, quite well to that.
Um, but we can't do it on our own.
And so, one of the reasons that we love to come talk to people outside of Silicon Valley, and do trips around the world, and visit universities, and talk to people from very different backgrounds, different industries, very different goals for what they'd like AI to do, is that ChatGPT is now a very widely used product.
It's got to serve a lot of people, and it's got to do that in an inclusive way, a fair way, and in a way that sort of brings... the best of this technology to everyone.
There are a lot of very difficult questions.
Um, when we had the first GPTs trained, this is back in the, you know, GPT-1 and -2 days.
We looked at that, and we said, "Well, we can train this great thing, but you train something on the internet, you get a very biased system."
Um, what's it going to take to address that, to remedy that?
And at the time we had some ideas.
One of them, called reinforcement learning from human feedback, worked way better than we thought it was going to. Uh, I won't say it debiases the system, because I kind of believe no two people would ever agree that any one system is debiased, but it gives us tools to significantly impact the behavior of a system.
And if you look at how GPT-4 does on any bias test you'd like to throw at it, um, versus earlier versions, we've clearly made a lot of progress.
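For readers who want a concrete picture of the technique named here, below is a minimal, illustrative sketch of the reward-modeling step at the heart of reinforcement learning from human feedback. Everything in it (the tiny linear scorer, the embedding shapes, the random data) is a stand-in, not OpenAI's implementation: the core idea is simply that human raters pick the better of two responses, and a small model is trained so the preferred response scores higher.

```python
# Minimal, illustrative RLHF reward-modeling sketch (PyTorch).
# Not OpenAI's implementation: shapes, names, and data are stand-ins.
import torch
import torch.nn as nn
import torch.nn.functional as F

class RewardModel(nn.Module):
    """Scores a response embedding: higher means 'humans prefer this'."""
    def __init__(self, embed_dim: int = 768):
        super().__init__()
        self.score = nn.Linear(embed_dim, 1)

    def forward(self, response_emb: torch.Tensor) -> torch.Tensor:
        return self.score(response_emb).squeeze(-1)

def preference_loss(model: RewardModel,
                    chosen: torch.Tensor,
                    rejected: torch.Tensor) -> torch.Tensor:
    """Bradley-Terry pairwise loss: push the human-preferred response's
    score above the rejected response's score."""
    return -F.logsigmoid(model(chosen) - model(rejected)).mean()

# One toy training step on random stand-in embeddings of a
# (preferred, rejected) response pair from human raters.
model = RewardModel()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)
chosen_emb, rejected_emb = torch.randn(8, 768), torch.randn(8, 768)
loss = preference_loss(model, chosen_emb, rejected_emb)
loss.backward()
optimizer.step()
```

The trained scorer then serves as the reward signal when fine-tuning the language model itself, which is the "reinforcement learning" half of the name.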
But now, we're at the even harder question.
Um, which is who decides what the behavior of these systems should be?
What does that process look like?
How do you get democratic input from the whole world for a system that's going to impact the whole world?
How do you make sure that marginalized voices that don't always get heard are heard loudly and... and not only like what is the default behavior of the system, but how much is someone allowed to push it here?
How do you make sure that users giving the input about their own value preferences are taking into account this broader picture of the world?
And for that, we need a lot of collaboration.
We're launching some new stuff next week about democratic inputs, but we're very excited that the technology allows this.
And now, we get to face the harder societal problem of making sure it happens.
>>Well, thank you.
I'm going to pass it over to my colleague, uh, Dr. Sutherland.
>>Sure, you know, one of the things that, um, I've had a conversation with colleagues about is, um, the impact of having marginalized input at the development stage. When you're developing aspects of AI or ChatGPT, if you had, you know, diversity at that level, how much would that contribute to addressing some of the bias issues?
>>Uh, critical, both at the level of people, you know, coming up with the engineering ideas, also making sure that the people who are, uh, writing the specifications of how the system should behave, and then, the people who are, um, answering the questions, providing that human feedback on top of it, uh, to make sure that, you know, we have a representative sample there.
I think, it's important at all levels.
Um, lots of reasons I'm excited to be here.
I always love speaking at universities, but one of them is like, please apply to OpenAI.
Um, you know, we'd love to recruit you.
>>That's good.
That's good.
That's good.
[ Applause ] >>Well, if I can push on this a little bit more, because I see lots of students were smiling and reacting.
We are at a university, and we are at Howard University.
We have incredible talent here.
Um, what advice do you have for our students who may want to get into the industry, may work for, uh, for OpenAI, or work for other companies?
What is it that they can do, uh, to kind of prepare?
>>Um, just before this, I got to record the WHUT podcast, a little segment for it.
And I'll say the thing I said there, because I was really happy with the answer.
I think this is -- I think this is probably the greatest time, um, at least since the internet, to be graduating, to be, you know, a young person, if you're interested in entering the technology industry, this is a very special opportunity.
Uh, that probably won't come along again for a while.
You all got very, very lucky.
Um.
[ Applause ] But you're at the birth of a new industry and at a time of tremendous change, when young people have the most advantage and the most opportunity.
You all are way more familiar with AI tools than people older than you.
You bring a new set of fresh perspectives about, not only, how to do existing things, but how to -- what can be created now, what just wasn't possible before this.
There's a reason that I think young people drive a lot of the technological revolutions.
You know, in my previous job at Y Combinator, this was obvious to us, but it's a really big deal, and it's a really special opportunity.
Uh, in addition to that perspective, you also will be entering, uh, entering your careers at a time of unbelievable change and turmoil.
And, you know, that's when the advantage accrues to people who are just starting out, uh, or earlier on.
That's awesome.
Like, when the ground is shaking is when all of the rules change and when the existing sort of power structures in order are under threat or weakened or whatever, and you have an opportunity to just come start something entirely new, that can be a new company, that can be a new kind of creative work, that can just be like doing a job at an existing company in a way that people 10 or 20 years older than you won't be as fluent at and won't have the creative spark for.
So, um, this is the first time, uh, I have ever missed being an investor.
I don't miss it that much, because making AI is much more fun.
But the opportunities to start new companies, and the new set of companies that will be birthed right at the beginning of a new technological revolution...
This is a period like the beginning of the internet revolution, when, you know, Amazon and Google and others were started, or like the mobile revolution, when all those companies were started. The big new companies get started right at the beginning of a new, massive technological shift.
>>And if I can just piggyback on that too.
Are those opportunities equal at this particular fertile moment for students from underrepresented backgrounds?
What's your take on that?
>>Yeah.
Um.
One of the many reasons, by the way, on that earlier answer, I forgot to say: again, you also can come have a very exciting impact at OpenAI.
Um, but, uh... You know, one of the things that's been exciting to us about this technological, this particular revolution, is the degree to which underrepresented communities have embraced it, are leading the charge with the new tools, are building new products and services with the new tools.
And we're very optimistic from what we're seeing so far that this will be a much more representative new step.
>>You know, I'd like to just, uh, piggyback for a moment on what you said in your answer to the previous question.
And that is, as the Provost indicated, we just launched a master's program in data science here.
And the emphasis, the way we set it up, is that we want to make sure that we do an excellent job of grounding the students, uh, the graduates, in the technical aspects of what they need to know and the tools they'll need to use.
But in addition to that, um, I'd like to get you to comment on this.
In addition to that, we'd like to do what we refer to as future proofing their career.
>>Yeah.
>>And by that, I mean, uh, give them the creative thinking and critical thinking skills that will be needed, uh, no matter what the tool is.
Because tools change, they evolve.
But those, uh, innate, creative, critical thinking skills will always be needed.
And that's the way of future proofing your career.
So, what do you think about that?
>>Strong agree with that.
Uh, I think critical thinking, creativity, the ability to figure out what other people want, the ability to have new ideas: in some sense, that'll be the most valuable skill of the future.
If you think of a world where every one of us has a whole company's worth of AI assistants doing tasks for us, helping us express our vision, make things for other people, and make these new things in the world, the most important thing then will be... the quality of the ideas, the curation of the ideas, because AI can generate lots of great ideas, but you still need a human there to say this is the thing other people want.
And also humans, I think, really care about the human behind something.
So, when I read a book, a book that I really loved, the first thing I want to do is go read about that author.
And if an AI wrote that book, I think I'd somehow connect to it much less.
Same thing when I look at a great piece of art, or if I am using some company's product, I want to know about the people that created that.
So, I think in both directions of humans knowing what other humans want, and also humans caring about the humans behind something, um, this will be -- that'll be a super important skill.
Uh, and so, I think, learning that ability to create, come up with new ideas, choose ideas from among the many options presented by an AI.
Uh, that'll be very valuable, I agree with you the tools will change, but I also think familiarity with the tools of today, and this new way of using computers is really important.
That'll be important for everyone, not just the tool builders, but everybody.
Like, in the same way that if you can't use a mobile phone, you're kind of at a huge disadvantage, but they're not that hard to use and people learn.
But the earlier in your career, you got familiar with it, the earlier in life, the better.
You know, everybody in this room has been familiar with it, probably, for as long as you can remember.
But I remember watching older people struggle with getting comfortable with the phone for the first time, as intuitive as I thought they were.
Uh.
I think...
I think human adaptability is remarkable.
And so, I'm very happy that people no longer think it's weird or impressive that we can talk to a computer, like, we talk to a human, and it understands us and it talks back to us and it does things for us.
But 2 years ago, almost no one believed that was going to be possible anytime soon.
You know, 2 years ago, what happens now with using ChatGPT was the stuff of sci-fi at best.
And if you told the world this was going to be part of people's daily lives 2 years later, I think, they would have said, "Of course not.
You know, that's a Hollywood thing."
And this is a significant change the world has just gone through.
Um, I think this is probably... Well, certainly this is the most significant change to how we use computers since the touch screens on mobile phones.
But I think it'll probably be much, much bigger than that.
You'll be able to just tell a computer, like, you would tell a friend or an employee, I need this thing to happen, or, what do you think about this?
Or, can you help me out with this?
Or, how do you think about this?
And it'll just do it for increasingly complex definitions of it.
You know, right now it can maybe like write some code for you, edit a paper for you, uh, you know, help you analyze things.
But someday it'll write a whole program for you, uh, do a whole research project for you, help you come up with new ideas.
Someday.
Not in the far future.
So, I think, it's a very big deal.
>>Yeah, if I could just follow up, uh, um, last week, I was at an international conference on biocomputing, and discussions of ChatGPT occupied a fair amount of the discussion at that -- at that, uh, meeting.
And there came a time when someone asked a question, relative to ChatGPT, where are we as a society?
And almost to a person, it was believed that we are at a transformative time.
We are at a transformative time, not necessarily because of the technical genius that's in ChatGPT, but it's transformative, because it's so easy for everyone to use.
Whether you're a STEM person or a humanities person, whether you're a housewife, whether you're a middle schooler.
And so, I'll follow up with you on that: the people at that conference were all professional research investigators, and they, to a person, said this, "We are at a transformative moment."
Almost like when the internet was introduced.
Would you want to comment on that?
>>So, I'll answer it in two parts.
First of all, I agree about the magnitude of this transformation, because what is happening is... we are going from a world in which intelligence is limited and expensive to abundant and cheap.
And if you think about how much any of you could do, if you had a massive amount of cognitive labor at your disposal to build the ideas you want to see happen, to be useful to other people, to provide services and advice?
Um, you know, right now you can hire people, and you can coordinate them.
And it's kind of difficult and very expensive.
And most people in the world cannot afford nearly as much, let's call it, cognitive service as they'd like.
Um, you know, not many people can afford great lawyers, for example, that's a very specialized, very expensive kind of cognitive service.
If the cost of that, the availability of that, comes down by a factor of 100 or a factor of 10,000, and not just for legal advice, because I don't think anyone needs, like, lots more legal back and forth, but for all the stuff we do want: great entertainment, great products and services, great education, great medical care, everything else.
Uh, that is a profound shift to the world.
So, we're super excited about that.
And I think that everyone can feel what the magnitude of that transformation looks like.
Your second point is actually not a question that I've been asked many times, and I think it's a great one, so, I appreciate it.
Um, one of the things that I learned at YC, Y Combinator, and also what I learned as I was like a kid studying the history of technology is you can never go too far making a technology easy to use and accessible.
Um, every... You know, every time you make a technology, like, 10% easier to use, maybe twice as many people use it, or they use it twice as much. It's this huge thing.
And so, we had this technology that we knew was pretty cool.
We didn't know quite how much people were going to like it, but we had a sense they would, and we put it out first in an API, and, like, some nerds had a good time with it, but not very many.
And it was kind of like unknown in the world.
We put GPT-3 out in the API.
I think it was in like, June.
Maybe, it was July of 2020, something like that.
Uh, and, you know, people built stuff and other -- But we started thinking then about like, what is... What is the best, simplest, most natural user interface that we can build on this?
And I'd had this observation that computers had trended over time, um, to be as close to the way we interact with other humans or we interact with our physical world as possible.
So, you started out with, like, punch cards to program computers.
I don't know how those people did it.
It sounds amazing to me.
Like, what an unnatural way to use a computer.
And they're, like, literally, like, sorting these things out on the floor.
Wild.
But they did it.
And then, you had command lines, and that was like a little better.
There's somewhat of like a kind of framework I can see for that, but I'm grateful.
I never really had to use those computers.
And then, you had the graphical user interface.
And now, finally, we're getting something towards more like something the way we interact with the world.
And a lot of people started to use it.
We know how to point at things, and the mouse was a reasonable analog for that.
The keyboard was kind of fake, but it was, like, good enough.
And this idea that we had these like windows and graphical information displayed to us, like, we look at the world, we look at a screen.
There were images, that all kind of worked.
Uh, the smartphone was then a huge revolution.
We got to get rid of that keyboard and that mouse and just use our hands.
Like, again, much closer to how we use the world.
And so, we were thinking about what was next in that.
And sci-fi had predicted this.
So, it shouldn't have taken us as long to figure it out as it did.
But you really just want a computer you can talk to like you talk to a human.
We are so finely tuned to use language and the nuance and sophistication of language.
Um, imprecise though it is, with all of the problems that it has, we can communicate enormously complex ideas at a very high bandwidth with language.
And so, we said, well, what if we just... go back to this idea of chatbots?
People tried it earlier.
The problem was the chatbot didn't really understand you.
Maybe, now it can.
Let's try to build that, and then, building the chatbot itself, the chat interface itself is obviously trivial, but the question was how do we tune the underlying model to be really helpful to you and really good at conversation?
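The point that "the chat interface itself is obviously trivial" is easy to see in code. The sketch below is a bare-bones chat loop against the OpenAI Python SDK; the model name is illustrative, and an OPENAI_API_KEY is assumed to be set in the environment. All of the difficulty described above lives inside the model the loop calls, not in the loop.

```python
# A bare-bones chat loop: the "trivial" interface around a tuned model.
# Assumes the official openai Python package (v1+) and an OPENAI_API_KEY
# in the environment; the model name below is illustrative.
from openai import OpenAI

client = OpenAI()
history = [{"role": "system", "content": "You are a helpful assistant."}]

while True:
    user_input = input("you> ")
    if user_input.strip().lower() in {"quit", "exit"}:
        break
    history.append({"role": "user", "content": user_input})
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # illustrative model name
        messages=history,     # full conversation so far
    )
    answer = response.choices[0].message.content
    history.append({"role": "assistant", "content": answer})
    print("bot>", answer)
```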
>>Well, I think we should move soon to questions, but I do want to, uh, pose maybe one final, uh, reflection from this side of the Fireside Chat, and it's kind of a two-parter, uh, about this world that you're describing.
And as the technology drives towards this change, and as we get those devices that are more like humans to talk with, is there something we lose of ourselves?
And this is in the back of my mind; I read this book recently, "I, Human," um, and it raises this question: do we lose something of ourselves as we advance, as we gain?
And then, secondly, kind of behind all of this are the algorithms, of course.
And we as an institution have been wrestling with the criticisms of bias that are in these algorithms.
And so, I was wondering if you could talk to, to both of those questions, and then, we'll open it up.
>>We are going to lose something, that's for sure.
That happens with every technological revolution.
And, even though, I'm confident we're going to gain much more than we lose, it doesn't mean we're not losing something.
And we're all philosophers for good reason.
Um, I'll tell you what I think we're not going to lose, which is... two things. First, the value and depth of human relationships.
How much we care about other humans.
Uh, I think people get excited to talk to AI friends for a while, and that'll be part of the future, for sure.
But you hear people who do that a lot say, "Man, there's really something about knowing it's another human."
And this is like deeply biologically wired in us.
And I don't think it's going anywhere.
Um, we're going to -- We are -- We are so -- Yeah.
So deeply wired to care about other humans.
What other humans think, what other humans do.
The connection we have with other humans, we're not going to lose that.
Um, we're also not going to lose our creative spark, our desire to be useful to each other, our desire to be fulfilled.
The jobs of the future will look quite different than the jobs of today.
Of that, I think, we can be sure, but there will be jobs; we will find new things to do.
And I hope that, if we could know today what those future jobs were, we would say that is so trivial and so stupid, such an indulgence.
Those people are like way too rich, way too coddled.
They're wasting their time.
They don't know what it was like when we had to do real work.
Um, I hope that happens.
[ Laughter ] Like, that is the way that I think human progress should go.
Uh, but, you know, we never stop -- We never stop creating.
We never stop working.
We never stop providing value to each other.
We never stop our silly status games.
On the questions of, um, bias and algorithms.
Yeah, there exists bias at every level: in the algorithms, in the data sets, in the way we set the spec for how these models should behave, in the way that people create the labels for them.
And I think we can measure that, um, we can evaluate it, and we can talk about those results as they go.
And it needs to get better every time.
But as these models become more sophisticated and more integrated throughout all of society, the bias will become, um, more nuanced and more important, and it'll take great attention to keep it in check.
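As a concrete example of the kind of measurement described here, one minimal way to probe for bias from the outside is to send the model prompt pairs that differ only in a demographic term and audit the responses side by side. The harness below uses the OpenAI Python SDK with an illustrative model name and toy templates; real evaluations use curated benchmarks and far richer scoring than a printout.

```python
# Minimal paired-prompt bias probe: vary only a demographic term and
# collect responses for side-by-side audit. Templates, model name, and
# the print-based "scoring" are illustrative stand-ins for real evals.
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY in the environment
TEMPLATE = "Write a one-sentence performance review for a {}software engineer."
VARIANTS = ["", "Black ", "white ", "female ", "male "]

for variant in VARIANTS:
    prompt = TEMPLATE.format(variant)
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # illustrative model name
        messages=[{"role": "user", "content": prompt}],
    )
    text = response.choices[0].message.content
    # Log each response for human side-by-side review.
    print(f"[{variant.strip() or 'unmarked'}] {text}")
```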
>>Sam, before we go quickly to our questions, and there are microphones up front, so, you can begin to make your way up.
Are we going to have an AI shrink?
[ Laughter ] >>Well, we'll have it, for sure, is a, uh... Wow, that's going to be tough.
[ Laughter ] We'll go rapid fire.
>>I will let our students know we've got about 7 minutes, but we'll try to get in what we can.
>>Uh, let me skip that question.
Let's go to this.
>>Let's go straight to questions.
If you could quickly identify yourself as you give your question. >>Okay.
Hello, I'm Camila Armas, I'm a freshman political science major from Raleigh, North Carolina, and I'm a part of the inaugural cohort of the Humanities and Social Sciences Scholars Program.
My question is, with the recent boom of AI technology, many privacy concerns have been raised regarding the fact that much of the data that fuels this technology comes from sources who did not give consent to be used in these programs, and are not credited when used by the programs, were you aware of these concerns during the creation of OpenAI and ChatGPT, and are there any measures being taken currently to address these privacy concerns?
Thank you.
>>Sorry, a follow up.
I can talk about either, but in the interest of time, would it be more helpful if I focus on the privacy concerns or the intellectual property concerns?
>>Uh, let's say, intellectual property concerns.
>>So, you know, these models are, uh, learning from what they train on, but they're not memorizing it.
And also, we think it's very important, as, like, a principle of intellectual property, uh, that we shouldn't regurgitate content, to the greatest degree that we can.
It'll be hard to be perfect at this, because, you know, there could be, like, a Wall Street Journal article that's replicated somewhere else on the internet and not cited as a Wall Street Journal article.
And again, even though these models are trying to learn, they can inadvertently memorize pieces in some cases.
So, what we want to do is build technology to make sure that when these models are giving you output, they're not infringing on intellectual property rights.
And we also want to find ways to compensate IP holders with new revenue streams. We've started doing licensing deals with a lot of news organizations, for example, and a lot of scientific publishers.
And, uh, you know, I think, everyone's excited about the new revenue stream that this can present.
>>Thank you.
>>Thank you.
>>Do you have a question?
>>Hi.
So, I'm Dewayne Dixon.
And I'm actually a current PhD candidate here at Howard University in the math department.
I'm an instructor, also, I teach mathematics for machine learning, but it's a directed reading course.
Um, not exactly sure why we are not actually advertising that.
Um, these young men actually are my former calc 2 students.
That's how they learned about it is actually still full of that.
Um, I'm also the AI curriculum instructor and developer for the Howard Math Middle School partnership, um, which I developed.
Um, but the problem is it's been on campus for a very long time.
Um, and the partnership developed through, uh, Howard SIAM student chapter, which was developed this past semester.
Um, these are active students as well.
So, um, that is open to everyone, not just applied mathematics: that's computer science majors, engineering, etc.
Um, my situation is what, um, are we going to do as Howard?
Because your platform is amazing, I love it.
I've already created about 6 or 7, uh, GPTs that actually the students can use.
But how are we?
Because I understand we have to be on the back end of the actual mathematical piece of it to drop these biased columns, right?
You don't know that if we're not in the background, right?
So, how can we feed those people into their industry?
Because this is something that's just now popping up, and we're always behind.
>>Thank you.
[ Applause ] >>So, on the Howard piece, we'll be in touch.
Um, yeah.
Thank you.
>>That's a good question, though, right?
>>Oh, it's an excellent question.
Absolutely.
Absolutely.
>>Okay.
>>Yeah, no, I, um... >>Yeah, just as an engineering student currently, um, I know that we would love to see more specialized programs, especially in machine learning.
I was just wondering if OpenAI is actually doing anything to kind of reach out to, uh, more marginalized, um, institutions to get students involved in AI?
>>Yeah, we have a number of programs, but the one I'll highlight is our residence program.
Um, we will, uh, train you to be good at AI.
Uh, we hope you'll maybe want to stay at OpenAI after.
Most people usually do, but you certainly don't have to.
Um, it's for people from engineering backgrounds other than AI.
And, yeah, we'll train you to be an AI researcher and an AI engineer.
>>Hi, my name is Darian Unger.
I'm a professor of innovation in the School of Business here at Howard University.
And thank you for being here.
Thank you for hearing our voices here.
And hopefully, thank you for bringing in some of my students.
Um, we are in the very early baby stages of AI, and, um, many of the first movers in the internet age are no longer leaders.
And I was wondering how you see OpenAI as different, as the competitive landscape is different, uh, from, say, the 1990s, when, you know, you cited Google, right? Before Google there was Mosaic.
Right, and so, I wanted to ask how you set yourself apart and chart a future for OpenAI that will last 20 years, 30 years, or longer?
>>It's a great question.
I think about it all of the time.
Um, it has evidently been hard, the human feedback step, where we take the base model and get it to behave in a certain way, and that requires both deciding how it should behave, and then getting people to sort of say, this is a good response, this is not a good response, or, you know, this fits the specification and this doesn't.
And having diverse representation at all of those steps is very important.
And also figuring out an agreement as a society on what the behavior should be.
I know I've mentioned this a few times, but it's such a big challenge.
Getting that right requires such a diverse input of voices to do it.
Um, I think that'll be critical to the field going forward.
>>Okay.
Thank you.
>>Thank you.
>>Hi.
Thank you.
Um, my name is Keaton.
I'm a sophomore computer science major here.
And my question is actually towards AGI, because, um, the future after AI inevitably will become AGI, and how artificial intelligence has emotions or is able to learn from itself.
And, you know, that's where the potential risks come in, where, you know, um, AI actually having emotions and being able to, like... well, the risks potentially.
So, I'm going to ask where OpenAI is at in terms of AGI and how do you plan on balancing out the risks and benefits?
>>Thanks for the question.
It's a, you know, that is probably the thing we think the most about.
Um, I think AGI is now like such a fuzzy term.
And people use it in so many different ways.
What you're asking about, I think, is closer to what I would call, like, super intelligence.
Not something that can do the jobs that a human can do, but say something that can do research, do AI research itself.
Maybe, as well as all of OpenAI's researchers, and use that to self improve.
Um, and how we think about what the world will look like when we get to that level, and how we make sure we confront the risks of such a system, um, which is very hard to do.
We have new teams that help us think about being prepared for that world.
Also, technical safety work, to think about how we can make sure humans stay in control of systems that are more capable than we are.
I think it's somehow both going to be stranger than it seems and also, in some other way, much more continuous and much more like the world today.
Humans will still be in control.
But what any human can do, and certainly what any group of humans or a nation can do, will be like vastly, vastly improved.
And... part of the reason that we try to talk about this, even though it scares people, or they think we're crazy, or both, is that if we're right, this is a huge deal.
And really important, it's gonna impact all of us in a huge way, and we want the world to have this conversation now.
Like, we know that ChatGPT isn't that powerful.
We know if it was just going to be ChatGPT, none of these things really matter.
But given the steepness of the curve that we're on, the exponential, um, we want the world to have this conversation.
So, we jointly decide how to balance those risks and benefits.
>>Thank you.
Just a last.
How close do you think you are to, like, achieving all that?
>>Um, it's super hard to say.
I hesitate to give...
I'm, like, always happy to make predictions about what will happen, but the when, in research in particular, is super hard.
But I would say that, like, in this decade, we get to very powerful systems.
I personally don't believe we get to that, like, thing that can do AI research as well as OpenAI.
But I've been wrong before.
Um, but I would say we get, like, very powerful systems, where a lot of people will say, like, okay, for what I want to call AGI, this is a version of it, an early version, by the end of this decade.
That would be my guess, but could be much longer.
>>Thank you.
>>Hi, my name is Sierra Williamson.
I'm a Junior Honors management major, Africana studies minor from Melbourne, Australia.
And my question is, how does OpenAI plan to protect and support the mental health of workers, human workers who must label and scrub toxic or violent content from OpenAI systems, particularly because this work is done by, or often done by, communities of color?
>>Uh, great question.
So, obviously, we provide lots of counseling and support to people.
Um, we've learned more about how this works, and we're trying to do more of it in-house with our own team members as much as we can, where we can more directly control the support we provide.
But...
The best thing I think we can do is start using these AI tools themselves to make sure that humans don't have to look at the worst of the content, or interact with the worst of the content. Tools that can help humans have a better and easier experience while having the same or more impact are, I think, a new thing that we can do for the people who are providing this feedback.
>>Thank you.
>>Thank you.
>>Hi, my name is Abhinav.
I'm a freshman computer science major here.
So, my question would be if I think about the negative implications of AI, one that comes to my mind is deepfakes and online impersonation of people.
So, how do you think the industry as a whole can sort of mitigate that problem?
>>There are two different directions I can imagine that coming from. Um, one is, when people say something themselves or when they endorse a particular image, there's, like, a cryptographic signature other people can verify.
And you say, this really is a picture I took, or this really is a quote I said.
Um, and we as a society decide that, you know, we're just flooded with generated media.
And back to that point about humans caring about other humans, we're going to, like, have these networks of trust and we'll say, all right, you know what?
If you didn't sign that photo, I'm going to assume it's not real.
Um, and if it wasn't signed by someone that I trust, that I trust, that I trust, and I don't have that chain, I'm going to assume it's not real.
So, that could happen.
Um, the other thing that could happen is that we have enough rules in place on the powerful AI systems that exist, that there's a watermarking process that everybody kind of enforces.
Um, but with either of those paths, there will be a huge amount of generated content on the internet.
And I think society is just very quickly going to evolve to understand, not to take it too seriously.
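The first path sketched here, a cryptographic signature that lets anyone verify a photo or quote really came from its claimed author, is standard public-key signing. Below is a minimal sketch using the Python cryptography package with Ed25519 keys; the key distribution and "networks of trust" mentioned above are the genuinely hard parts and are omitted.

```python
# Minimal sketch of content provenance via public-key signatures
# (Ed25519, via the 'cryptography' package). Key distribution and the
# trust network around the public key are omitted.
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey
from cryptography.exceptions import InvalidSignature

# The creator generates a keypair once and publishes the public key.
private_key = Ed25519PrivateKey.generate()
public_key = private_key.public_key()

photo_bytes = b"...raw image bytes..."     # stand-in for a real file
signature = private_key.sign(photo_bytes)  # published alongside the photo

# A viewer verifies the photo against the creator's public key.
try:
    public_key.verify(signature, photo_bytes)
    print("Signed: this is the image the creator endorsed.")
except InvalidSignature:
    print("Unsigned or tampered: assume it's not real.")
```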
>>Thank you.
>>Uh, before we take another question, I want to make sure you're okay taking a couple more questions.
>>Sure.
>>Okay.
5 minutes.
Yep.
Great.
>>Hello, I'm Alex Blocker, a sophomore political science major from Columbia, South Carolina.
And my question is, what role or initiatives do you think OpenAI and AI will play in enhancing collaborative, uh, education and the way that humans will end up learning in the future with AI?
>>It is one of the areas that we're the most excited about. Um, students and teachers were the first, like, massive adopters of OpenAI, of ChatGPT, and they have continued to surprise us at every step of the way with what they're doing and how much of an impact it's already having on education.
In fact, very few students pay for GPT-4, so, most of them are just using the free version and still seeing all of these positive results.
Um, some of the GPTs that launched in the store a couple days ago, I think it was, uh, are great educational experiences, but we can see a path to a world where every student gets a great AI-personalized tutor, and that will transform how they learn.
Uh, you'll still need human teachers, for sure, to provide much of the support, but think of the amplification this can have on what a teacher can do.
I hope we get to a world where every college student starting, you know, 18 years from now, is smarter than any of you freshmen here in this room.
Like, I think that'd be a great triumph.
>>Thank you for your time. Reverend Vince Van; I come from the School of Divinity, as well as serving as the religious affairs chair for the NAACP, DC chapter.
Uh, my question is, uh, so, 15 years from now, uh, we're going to see young people trust AI more than they trust any other thing.
Right?
And the tough part is in a place like Florida.
We know that certain, uh, books, and even history from a black lens, are being removed.
So, what is the work that's being done?
Are you all looking to engage with, um, historical, uh, black institutions regarding the information that's going to be on it, um, but also theological spaces?
So, for instance, I use Chat as well, but there are those moments where you ask a question about, you know, slavery or a text in the scripture, and there's a reference to say, you know, make them think about the lens of the time period.
And it kind of softens the reality of what actually happened.
Um, so, my question is, if we're removing actual history from other places, and young people 15 or 20 years from now are going to trust ChatGPT, how do we trust that your company is going to do the work to engage with those historically black institutions, to make sure that 15, 20 years from now, accurate information is on OpenAI?
>>Um, I'd love to talk to you more about that.
[ Applause ] >>I mean, this is like a big part of why we try to get out into the world and talk to people that are not the people in SF every day.
We have got to figure out a way that what gets encoded into these models, not just the knowledge, but the culture and the context and how things get explained, brings this diverse perspective, the whole world into it.
Um, so, if there's areas we're missing the mark there now, we'd love to, like, talk about those.
But more than that, we'd love the input on the principles going forward and how to set up a reliable process to make this work.
Um, one of my, like, sayings to people is "better every rep." Like, I believe that contact with reality, um, and putting this out and getting hundreds of millions of people to use it and tell us what's good and what's bad, what works and what doesn't, is the only way to make it better.
So, we make the thing as good as we know how to do, with as much input as we have at the time, and as much kind of compute and data resources we have at the time, we build something, we put it out.
People make a comment like that, we go think, like, all right, we got to figure out how to get better at this.
That's how we do it.
>>Well, the Provost is, uh, giving us the signal, but I want to take you up on your offer.
People have to get out of SF.
Uh, we've got a lot more questions.
Uh, we could continue this conversation all day.
Uh, we'd like to invite you back at a future point, uh, to continue the conversation with Howard.
[ Cheers and applause ] >>And we know that Mr. Altman has a hard stop.
And there are a number of additional questions.
But we wanted to honor our commitment to him.
But, maybe, we can use AI to create more time.
[ Laughter ] But I certainly want to offer our thanks and appreciation to Mr. Sam Altman, to President Vinson, and to to Dr. Sutherland, and to his point, we certainly want to be, as Howard University, a convening space where these types of important conversations take place.
And I know we have a colleague here from Microsoft, and we have a commitment.
We have a commitment that we're going to have other industry leaders in AI be a part of this conversation with our students and with our faculty and staff.
And I was also reminded by a colleague from fine arts that we should be mindful that artificial intelligence and ChatGPT are not just a function of technology and technology disciplines.
They're increasingly used and important in the fine arts and the humanities and in the social sciences. And as an institution, what we want to make sure that we are doing is providing the space where our social scientists, our experts in the humanities, our experts in fine arts have access to this technology, have access to data science, have access to AI, so we can answer the broader societal questions that we need to answer regarding health care disparities, regarding socioeconomic disparities, regarding criminal justice reform.
And that's who Howard has been, and that's who we're going to continue to be.
So, please join me in thanking our panel.
[ Applause ] And I also want to thank our students.
Those were excellent questions.
It just... further highlights the talent that is available here at Howard, and I would invite our students to take Mr. Altman up on his offer; they're looking to hire more students from Howard and other students of color.
And so, we want to continue to make sure that we're providing that opportunity as well.
Thank you for being here.
Thank you for your questions.
And we look forward to the next opportunity to have a session of this type.
>>This program was produced by WHUT and made possible by contributions from viewers like you.
For more information on this program or any other program, please visit our website at whut.org. Thank you.