Support for this episode comes from AWS. AWS Generative AI gives you the tools to power your business forward with the security and speed of the world's most experienced cloud.
Hi, everyone. This is Pivot from New York Magazine and the Vox Media Podcast Network. I'm Kara Swisher.
And I'm Scott Galloway. And you're listening to our special series on AI. AI has changed our personal and professional lives in many ways, with plenty more changes to come.
Here to talk with us about how AI is impacting work and what people need to do to stay ahead is Andrew McAfee.
Andrew is a research scientist at MIT and author of The Geek Way. Welcome, Andrew.
Hi, Kara. Hi, Scott. Thanks for having me.
So you've been studying AI for over 10 years. You've also co-founded Work Helix, a startup that helps companies figure out where to deploy generative AI. Scott and I talk about it a lot.
Scott says if you're not using AI, you're like a person who says they're not using computers, and you're fired. Can you talk about how AI has changed the workplace and where people are with it right now?
I'm completely with Scott, and I realize this makes for a boring podcast if he and I just agree with each other all the time. But it's pretty clear that AI, and specifically generative AI, are already changing the world of work.
And there are lots of different ways to think about that. Here's my favorite. What we have as of the dawn of AI is an army of clerks, colleagues, and coaches who are available on demand to just about any professional.
So a clerk, for example, is somebody who will listen to the meeting and transcribe the notes and give you the highlights of it, or transcribe a patient meeting. That's kind of low-level grunt work.
And we now have an armada of technologies that are good at it. A colleague is somebody who will translate a foreign language document for you, or help you tighten up a thing that you've written by 20 percent, or change the writing level of it.
Or for me, because I'm a terrible artist, I've now got a colleague called Midjourney who figures out the artwork for me. And a coach, I'll give you a silly example.
I love crossword puzzles, and there are a bunch of words that show up over and over again in crossword puzzles, but have silly definitions.And I use AI to quiz me on those so I stay sharp on them.
A much better example is something that my co-founder Erik Brynjolfsson discovered when he did research. It turns out that when they deployed generative AI in a call center, performance went up by about 15% very, very quickly. That's great.
The cooler part is that the biggest performance gains were found among the newest and the least skilled employees.
So what was going on was that generative AI was kind of whispering in their ear, prompting them in real time on what to say to a customer in a chat.
I think all three of us have had careers in tech long enough that we've heard about knowledge management over and over and over again. And it's been a bitter disappointment overall.
I think we finally have incredibly powerful knowledge management technologies. So just about any of us can have a coach now. So when I think of clerks, colleagues, and coaches available on demand to all of us, I get pretty excited.
All right, Scott, since this guy agrees with you violently, it seems.
Yeah, I like the terms you used. I would say it's more like having a thought partner shadowing you. But I'll put forward a thesis, a two-part thesis, and you can respond to it.
One, the catastrophizing by people after they've vested their options, I just find really obnoxious and overdone, quite frankly. I don't believe it's going to go sentient and kill us. I mean, and this comment might age.
Yeah. If it ages poorly, we have bigger problems, right? Amen.
One, do you believe that this poses the kind of threat that a lot of people claim? And the thing I can't stand is, now that I have my $10 million in Google stock, I'm going to talk about how worried I am about this monster I created.
Anyway, one, do you think the catastrophizing is warranted, overdone, underdone? And two, given that every technology has externalities, what are you most worried about?
Can I just add some facts around this?
This is a job-killer question too. A 2023 Goldman Sachs study estimates 300 million jobs could be lost or diminished by AI, though, according to the same study, the cost savings from the technology will ultimately help increase annual global GDP by 7%.
Elon Musk recently predicted AI will get humans to the point where no job is needed, which is probably overstating it, but go ahead.
Yeah, and the robotaxi was here in 2019. Yeah. I'm sorry, go ahead.
Yep. So there are about three different kinds of risk that I keep hearing about. And again, I'm sorry to be boring in advance, Scott. I'm again going to violently agree with you. There's the existential risk. This is the superintelligence, killer-AI one.
Right, Skynet. The Skynet thing. I think it's put out by people who, A, have vested options, ha ha, and/or people who read way, way too much sci-fi when they were sophomores and didn't really get past it.
I always fall back on what Andrew Ng says about this question. He's one of the founders of modern machine learning, and I love his response. He was at Google, he's founded a bunch of companies, and he's an investor in Work Helix.
And to me, he's nailed this issue. He said, worrying about Skynet and superintelligence is like worrying about overpopulation on Mars. I guess it's theoretically possible. Boy, do we have more important things to worry about.
Now, there are other real harms that are going to come along with any technology as powerful as AI. We know it's going to be used for misinformation and disinformation. We know the bad actors are going to grab this technology and do things with it.
This is a real problem. We need to deal with it. But in a sense, it's not new. All powerful technologies, especially ones that are pretty democratic, are going to be used by the bad actors.
Kara, you brought up the economic risks, and in particular the risk of massive technological unemployment. We have been worrying about this since the Luddites were smashing looms 200 years ago.
Every powerful technology brings along another wave of— Well, they did have a point, to be fair, but go ahead.
I'm cheesy. Well, they did, but let's get to what the point is. Every powerful technology is part of the cycle of creative destruction, and with 200 years of history so far, the pattern is incredibly clear.
The creation outweighs the destruction, and that has to do with jobs as well. The problem we face in America right now, and in most of the rich world, is absolutely not technological unemployment. We have chronically low levels of unemployment.
We don't have enough people, at every level of skill and education, to do the work that needs to get done.
The idea that that's going to change in five or 10 years because of a technology, even a technology as powerful as AI, just doesn't make any sense to me. Now, Kara, to your point, the Luddites did have a point.
The skilled weavers did see their employment decline. We used to point to bank tellers as an example of how, even when there's a new technology called the ATM, demand for bank tellers still goes up.
That was true until about 10 years ago, and now bank teller employment has been declining. So we can see technology displace workers in specific jobs.
We can see technology displace or reduce employment in overall industries like agriculture and manufacturing, even though output from those industries continues to go up.
But the creation part of creative destruction is really, really easy to underestimate. And what's been happening is that the innovators and the entrepreneurs have kept on coming up with new things that need to be done that can only be done by people.
So in that vein, what are some of the most important AI skills and tools that employees need to be learning right now to stay relevant? And, related, what are the best and worst cases you've seen of AI in various professions?
In the short to medium term, the skill is: keep doing your job, but realize that you have this very powerful clerk, colleague, and coach alongside you that is going to take over sometimes really big chunks of your job. Translator is a great example.
And I was shocked to learn earlier this year that, as far as we can tell, the number of translators employed in the United States continues to go up. Now, it's not millions and millions of people, but that job is still increasing in employment.
And that was weird to me until I thought about it a bit more deeply. I realized that translating the document is actually only one of the things a translator does. And now they don't start with a blank sheet of paper.
They start with a machine-generated translation. But that translation is gonna have mistakes. It's not going to be very good at some kinds of idioms. It's going to be imperfect. So the proofreading, the copy reading, becomes really, really important.
And the importance of negotiating with the client, making sure you've got the gig right, making sure you deliver a clean product that's free of flaws, those skills become more valuable and translators see demand for their services go up.
That's the pattern we're going to see a lot of.
So instead of trying to predict any massive changes in how a particular job is going to be done, the one thing I can say with huge confidence is that technology is available to help you with any knowledge work job that you have right now.
And the sooner you start exploring the toolkit and putting it to work and learning, the better off you'll be.
So this is what I do advise: just use it and you'll see. Best and worst cases you've seen, very briefly?
I'll give you the absolute worst case, because it's me. When I was finishing my last book, I deployed generative AI as a clerk to help me do the citations for the book. You know, the messy endnote stuff in the back.
And it engaged in what's called hallucitation. It made stuff up. This is a terrible, terrible use case, and I should have known better. But at Work Helix, we've looked around, and we found that if you add additional technologies on top of generative AI, something close to 50% of the jobs in the economy can see a huge portion of their tasks substantially improved, with no quality loss, via AI.
So the mantra that we have inside the company is plan a little and iterate a lot to figure out how this is going to change your work.
Okay, let's take a quick break. When we come back, we're going to talk about how companies can integrate AI into their workflows. And we'll ask Andrew for some predictions.
Support for this episode comes from AWS. With the power of AWS Generative AI, teams can get relevant, fast answers to pressing questions and use data to drive real results. Power your business and generate real impact with the most experienced cloud.
Scott, we're back with our special series on AI. We're talking to MIT research scientist Andrew McAfee. Scott, what would you ask him?
Yeah, so Andrew, one threat, one externality, I don't think gets enough attention: I worry that a lot of people are struggling with loneliness, and AI will only replace more and more relationships and decrease the motivation to go out and make friends or pursue romantic relationships, because of these AI bots.
The search volume for AI girlfriend has skyrocketed. Have you thought at all about the role AI might play in loneliness, specifically amongst young men who are kind of increasingly sequestering from society?
Yeah, and you bring up this loneliness epidemic, which I'm hearing a lot about. My read of the evidence is a little bit more ambiguous, but I'm not saying that there is no loneliness epidemic.
There clearly are people who are dropping out of society, and technology, especially modern technologies, is really good at helping them do that if that's what they want to do.
It feels to me like it's a bit related to the anxiety epidemic among adolescents, which I do think is a real thing. And social media is a part of it.
So again, this bundle of extraordinarily powerful technologies that we have is going to bring some harms and some downsides along with it. And I think you've nailed probably an important one.
What I believe is that we need societal, community, and policy kinds of responses to the harms that technology brings up, as opposed to trying to make sure that technology can never, ever do that harm.
So I would not want to call a halt or put guardrails on conversational AI just because I'm worried about a loneliness epidemic.
In other words, I would try to think of other things that we can do to get people out of the room, get them to engage with each other in this pretty atomized world that we live in. And Kara, you bring up the workplace.
And there's a massive controversy.
Because if you're getting rid of coaches, and these are all people you interact with...
Amen. And I think this is a real challenge. Like I say, we have very, very powerful knowledge management technologies now. However, we humans learn from each other. We are the most social species on the planet.
And I think we need physical proximity, at least some of the time, to pick up and accelerate that learning.
So along with the return-to-work wave that's happening now, we also have to figure out how we use AI, and how we coach and advance our skills with AI.
So it's interesting, I just interviewed Seth Meyers, and he said the thing he misses is the around-the-table stuff with young comics; they're online, they don't need to go to Chicago and meet a group.
It was really interesting because it had the same implications. But if you're advising a CEO or business leader about integrating AI into their company right now, what would you tell them?
And given these capabilities are growing so quickly, how can you stay ahead? You feel like you're perpetually behind, and it does remind me of the early internet days: use this browser, use this, use that.
A lot of them were bullshit, but I tried them all. It was an enormous waste of time, but in a weird way, it wasn't.
Yeah, it was and it wasn't, right? And what, Kara, I think you and I learned back in 1994 or whatever was how to surf the web, and the value of surfing the web.
So the fact that, you know, Firefox might not have won out, or what was the Norwegian one called?
There were so many, there were so many, I forgot.
Yeah, there were so many. That the particular one you were using didn't pan out was not that relevant, it turns out. They weren't crazy expensive, and with the broad skill of how to use the web effectively,
the quicker you started with that, the better off you were.
And in particular, when a technology this powerful comes along, what's called a general purpose technology, on a par with the steam engine, I think sitting on the sidelines and waiting for the environment to calm down is a terrible, terrible idea.
Because instead of the first mover or the fast follower, I think the race here goes to the fast learner.And the way to learn is to do a little bit of planning and a whole lot of iterating.
I worry, Andrew, that just as the time to mass adoption, the cycle time, is decreasing across products that are truly breakthrough, including AI, you also see the emergence of a dangerous concentration of power.
I see a new kind of Wintel-like duopoly forming, and that is what I affectionately refer to as OpenVidia: OpenAI and NVIDIA, with 70 and 93 percent share, respectively.
Do we have a new duopoly that we should be thinking about breaking up already, or at least keeping our eyes on?
Oh, great, we get to disagree. So my answer to your great question is yes and no. We do have a new duopoly.
I don't think we need to be getting out the antitrust hammer and whacking on OpenAI and NVIDIA, because, Scott, your career, like mine, is long enough to remember when we were terribly worried about IBM's dominance and then terribly worried about Microsoft's dominance.
And your duopoly did not include Alphabet slash Google, which is now the subject of antitrust scrutiny. So the pattern that I think all three of us have seen over and over in high tech is what I would call dominance and then disruption.
There are gigantic, really powerful tech companies, and they seem unassailable. And then they get assailed from some quarter that we weren't expecting.
It's bizarre to me, Scott, that the duopoly you bring up features a startup. How old is OpenAI? It was founded out of a weird nonprofit, and now it's the one that we're worried about. I do not have a crystal ball.
If and when the three of us come together in five years, let alone 10, we're going to be worried about another duopoly or crazy concentration of power, not the one we have today.
And that's not because the Justice Department or the FTC broke things up with the hammer. It's because tech is a very, very hard industry to stay on top of.
Yeah, that's absolutely true, although it is the same people. You know, now I'm remembering a thing. I forgot about the PointCast Network. Remember them? Oh, man.
This is a push technology. I see you.
Yeah, we were doing all that.
Yes, you've already. He's smart. He hit the bed.
Oh, there's so many. Oh my God, there's so many. And there will be in this too. So, last question: Anthropic CEO Dario Amodei recently wrote an essay predicting AI's utopian future. He's one of the safety people.
So that was interesting, this world where AI has achieved superintelligence and where progress is achieved at a faster pace. Underhyped, overhyped? I guess that's kind of too broad a question.
And can you give us a prediction for how you think we'll be using AI in five years and 10 years? So that's a two-part question. Overhyped, and what did you think of the Amodei essay?
I think hyping the superintelligence, singularity, Skynet stuff is really, really overdone and premature and actually harmful to the industry.
I heard a guy at one of these superintelligence conferences a while back say to the assembled people, you all need to fire your PR department.
This is attracting scrutiny from regulators, which only incumbents want, and it's turning the public against these extraordinarily powerful technologies that we need to cure disease, get through the energy transition, and do our 21st century homework.
I agree with Dario that we are creating crazy powerful technologies. We're not done yet. But my prediction for the next five or 10 years is, first of all, we'll be talking about different, very powerful technology companies on top.
And second of all, for the companies that get off the sidelines, that plan a little and iterate a lot, we're going to see the performance gaps that already exist across the economy, and are real, get bigger.
And I'm actually really worried about the incumbents of the 20th century, the really successful companies that got built up over the course of the 20th century.
Because as far as I've seen, they're getting their butts kicked by a bunch of kids from Silicon Valley. And I want the incumbents to make it a less lopsided fight. That's what I'm keeping my eye on for the next five or 10 years.
And the incumbents being?
Think of the incumbents in the auto industry. Think of the incumbents in the space industry. Think of the defense industry. Think of entertainment. Hollywood was a very, very stable place for a century.
And then a DVD-by-mail company comes along and stands it on its head in less than 15 years. This is bizarre, right? We don't see this very often in businesses.
Fine, they deserved it, but no one was going to do it for them until the DVD-by-mail company came along.
They deserved every bit of it. I'm sorry, I was there. No, we're better served today. That's great. Right. Very last question, Scott.
If you're a kid coming out of college, or maybe not even college, but you have some aptitude for technology and you want to work hard, are there specific industries or specific skills you'd point to? I mean, I don't know about you, but I had no idea what I wanted to do at 22.
I just knew I wanted to be successful at some point, and I was trying to build a base.
What is that in your view, if you were coaching someone who really was open to almost anything in terms of where to live, what industry to be in, what skills to acquire, what advice would you have for them?
I love this question because my answer to it is both vague and adamant.
The adamant part is get yourself to a concentration of alpha geeks, apprentice yourself to them, and learn to build stuff and make stuff and think about business in the world the way that they do.
So, you know, go knock on the door of the people pushing the envelope the hardest in the area that you think you're interested in, and just go put yourself by their side and learn how they're doing stuff.
This is different than going to work for one of the great advisory companies of the 20th century, a bank, or one of the industrial giants of the 20th century. The future is already here; it's just not evenly distributed. Go learn from the alpha geeks.
Defined the way that you want to define it.
Okay, thank you, Andrew McAfee. Really interesting time we're going through, going forward. It's a really interesting topic. As we all say, just start using it. We keep saying that to you. Stop not using it; use it.
It's like saying, I don't want to use a telephone, I don't want to use a telegram, I don't want to use television. Just use it. You'll see and figure it out yourself. Okay, Scott, that's it for our AI series. Please read us out.
Today's show is produced by Lara Naiman, Zoe Marcus, and Taylor Griffin. Ernie Indratat engineered this episode. Thanks also to Drew Burrows and Ms. Severio. Nishat Karwa is Vox Media's executive producer of audio.
Make sure you subscribe to the show wherever you listen to podcasts. Thanks for listening to Pivot from New York Magazine and Vox Media. You can subscribe to the magazine at nymag.com slash pod.
We'll be back next week for another breakdown of all things tech and business.
Support for this episode comes from AWS. With the power of AWS Generative AI, teams can get relevant, fast answers to pressing questions and use data to drive real results. Power your business and generate real impact with the most experienced cloud.