A.I. is a Religious Cult with Karen Hao
0:00
hello and welcome to Factually I'm Adam Conover Thank you so much for joining me on the show again This week on the show
0:05
we are once again talking about a Sam Altman company You might have joined me last week when I did a whole episode on
0:11
my ill-fated decision to do a video for his company World Uh very funny [ __ ]
0:17
on my part that I very much regret and I discussed it extensively Check out that
0:22
video I'm sure you'll enjoy it uh more than I did At any rate um but this week
0:27
we're talking about Sam Altman's main gig OpenAI because this company is simply
0:32
put one of the most important things happening on Earth right now And by important I don't necessarily mean good
0:40
See Sam Altman is out there telling Congress podcast interviewers the press
0:45
anyone who will listen that he is in the process of building an artificial general intelligence that is going to
0:51
destroy the world And yet he needs as much money as possible to build it as quickly as possible He has raised more
0:57
money than any tech company ever has in history And he's allied himself with the Trump administration in an attempt to
1:03
get even more government support to build his AI even more quickly Now if you're a skeptical person you might
1:08
listen to that and think "This sounds ridiculous I mean can this man actually believe that he's creating a god out of
1:14
silicon chips?" Well actually as my guest today on the show is going to argue yes The AI industry from the
1:23
inside actually resembles in many ways a religious movement a a religious
1:28
ideology about the future And whether or not that ideology is true even more
1:34
importantly on a material level these companies are transforming our world And
1:39
she compares them to the imperial powers of the late 19th century That these AI
1:44
companies are literally empires unto themselves This interview was absolutely
1:50
fascinating and gripping I know you're going to love it Before we get to it I just want to remind you that if you want to support this show you can do so on
1:56
Patreon Head to patreon.com/adamconover Five bucks a month gets you every single one of these episodes ad-free Helps us bring these
2:03
wonderful interviews to you week in and week out And of course if you want to come see me on the road I'm doing
2:08
standup comedy Head to adamconover.net for all my tickets and tour dates Coming up soon I'm headed to Oklahoma
2:14
Washington State We're adding new dates all the time adamconover.net And now let's get to this week's interview with Karen
2:20
Hao Karen Hao is simply put one of the best reporters working today to cover OpenAI and the entire AI industry And she has a
2:27
blockbuster new book out now called Empire of AI: Dreams and Nightmares in
2:32
Sam Altman's OpenAI Please welcome back for her third time on the show Karen
2:39
Hao Karen thank you so much for being on the show again Thank you for having me back It was wonderful to talk to you I
2:46
think it was a little over a year ago uh maybe a little bit more We were talking about uh the uh crisis at the top of the
2:54
leadership of OpenAI Um since then Sam Altman has retaken the reins The company
3:00
has only gotten bigger and more powerful What is its place right now in the tech
3:06
world and frankly in America at large yeah that's a great question I mean it's interesting because within Silicon
3:12
Valley I think its position in terms of being a leader in research has weakened Uh it no longer really retains its
3:19
dominance in terms of the cutting-edge nature of its models There are a lot more competitors in the space Um
3:26
they're they're keeping up quickly There's also a lot of open source models that are rapidly catching up Um but in
3:34
terms of OpenAI's position in the US and the world um as both an economic
3:40
and political power it has certainly grown because of Sam Altman's ability to
3:46
very strategically sort of maneuver himself into positions of power and align himself with other people in power
3:51
And so he has very effectively aligned himself with the uh the Trump administration and President Trump
3:57
himself Um and you know most recently was in the UAE with President Trump by
4:03
his side striking deals um in the Gulf States to try and continue getting more
4:08
capital uh and building more data centers around the world So um from that
4:13
perspective it has truly elevated itself to a new echelon of power And how
4:19
has Altman done that and what is he able to do that other tech executives are not He is a once-in-a-generation
4:26
storytelling talent Uh he is just able to really paint a persuasive vision of
4:34
the future and to get people to really want a piece of that future And he also
4:40
has he has a loose relationship with the truth So when he's meeting with
4:46
individuals what comes out of his mouth is more tightly correlated with what they want to hear than what he
4:54
necessarily needs to say And I think this is incredibly effective with
4:59
President Trump I mean it's effective in general with many many people But yeah Trump loves to be told what he
5:06
wants to hear so it's specifically effective with President Trump And uh I think he
5:14
essentially sold Trump on this idea that the Stargate initiative um having huge
5:20
gobs of investment um come into the US for building out compute infrastructure and also building out compute and
5:27
bringing uh American AI infrastructure all around the world could be part of
5:33
his presidential legacy And so I think that is what's enabled him to facilitate
5:39
this very tight coordination with the US government But you're saying I I want to drill down a little
5:45
bit You're you're saying that he's he's a really good liar basically that he's
5:50
that he is uh really good at convincing people to do what he says and give him
5:56
money That's a little bit different from being say a once-in-a-generation product
6:01
talent or engineering talent or you know you know like uh a Steve Jobs figure I'm
6:07
sure Steve Jobs is also very persuasive right but he but he also had a talent for uh product design and that sort of
6:12
thing Um and Steve Jobs also had a talent for storytelling and for not necessarily engaging in the truth Um and
6:19
I think Altman very much worships Jobs in that regard and a lot of Silicon
6:25
Valley worships that kind of ability to craft extremely
6:31
persuasive visions of the future And so I I do think Altman is very much a product and a pinnacle of Silicon Valley
6:39
Yeah Uh but in some ways it seems to me like when I look at Sam Altman I see
6:45
someone who is spinning a vision but I'm unsure how much reality is behind it And
6:52
that's just the vibe that I get you know like yeah Steve Jobs was like a great salesman but he was holding an iPhone
6:58
you know what I mean he was holding an iPod Um and a lot of my criticism of OpenAI has been hey you know ChatGPT
7:05
is like pretty useful What is the case for all of this massive investment
7:12
though you know how much improvement has there actually been etc It seems like the storytelling is more divorced
7:19
from the reality to me But again I don't dive into it nearly as closely as you do What does it look like to you I
7:26
think you're hitting on a very very important observation here which is that unlike a physical product like a
7:32
smartphone AGI or artificial general intelligence is so poorly defined
7:38
as a term And so if your objective is to race
7:45
towards this unknowable goal Yeah there's going to be a huge
7:51
divorce between narrative and reality Um because no one can really articulate what this goal is and what it's going to
7:57
look like and who it's going to serve Um and I think this is very much a product of the fact that AI as a field
8:05
even back when it first was founded in the 1950s um it pegged its own
8:11
objective on this idea that they wanted to recreate human intelligence And to this day we scientifically have no
8:18
consensus on what human intelligence is right and so if you're trying to recreate something that we still don't
8:24
really understand yeah you're going to get a lot of handwaving You're going to get a lot of future vision painting
8:30
without actually any grounding in concrete examples concrete details or
8:35
concrete facts Yeah And so you get this effect where Altman goes on these podcasts goes before Congress and sort
8:43
of tells this story of AGI is coming and it's going to do XYZ and well what is
8:50
that story that he is telling like you've I'm sure you've heard him tell it many times What is the sort of core of it it has gotten more dramatic over time
8:58
the more money that OpenAI needs to raise and the more they need to ward off
9:04
regulation the more the stakes rise So you know there
9:11
are some core like tenets in the AGI mythology One of them is AGI is
9:16
going to cure cancer It's going to bring us super affordable amazing healthcare to everyone It's going to solve climate
9:23
change it's going to wave the wand and and wipe away poverty Um but you know he
9:30
said in a blog either at the end of last year or the start of this year that we are now entering the intelligence age
9:38
and the things that will happen in this age are so profoundly utopic that we
9:44
can't even imagine them So he he was upping the ante saying you know even curing cancer and solving climate change
9:50
is is not sufficient to contain or describe the sheer orders of magnitude
9:58
of abundance and prosperity and goodness that is going to come I mean how is this not a religious cult you know like that
10:06
that sort of Yeah In the future I can't even describe to you all the good things
10:12
that you're gonna get like it it's beyond the bounds of human language to to even begin to list all of the wonders
10:20
that AI will bring you I mean this is it's by definition sort of nonsensical
10:26
Um and yet people are doing what he says as a result of it Yeah I
10:33
think I think it is exactly right to think of this as a quasi religious movement And one of the biggest
10:39
surprises I think when I was reporting on the book was how much of a quasi
10:45
religious atmosphere is surrounding AI development um and sort of has
10:52
gripped the minds of the people within Silicon Valley who are working on this thing And there are two two sides of
10:58
this religious movement They're all within the religion of AGI but there's one side that's saying AGI will bring us
11:05
to utopia and the other one that's saying AGI will kill us all Um but ultimately it is all kind of rooted in a
11:13
belief It's rooted in belief There's not really evidence that they're pointing to It is their own imagination that is sort
11:19
of projecting their fears their hopes their dreams of what could happen Um and
11:27
they also very much have this narrative when they paint this religion that
11:33
because the stakes are so high and really this is a make or break it moment for humanity that they alone are the
11:40
ones that have the scientific and moral clarity to control our progression into
11:45
that future It's such a strange pitch though Like the pitch is there's a
11:51
meteorite there's an asteroid coming for the Earth and it's going to wipe out
11:56
humanity and also I'm the one creating the asteroid Like I'm in charge of the
12:02
asteroid and so I'm in control of how it's going to hit and so you want me to
12:08
you want to be on my good side so that it doesn't hit your city Is that basically the idea like it's it's so
12:14
strange Yeah it is It is I mean it takes religious rhetoric to a different level in that you don't believe in a god that
12:23
is higher than you You believe you are creating the god And I you know the thing that was
12:30
surprising was I thought this was originally rhetoric and it's not for many people For many people it is a
12:37
genuine belief that this is what they are doing This is their purpose Um
12:43
especially for people in the in the doomer category the people who believe AGI will kill humanity I I interviewed
12:49
people who had very sincere emotional reactions when talking to me about the
12:55
possibility that this could happen Their voices quivering them having just a lot of anxiety and a lot of stress about
13:03
really viscerally feeling that this is a possibility And I think a lot of that
13:10
stemmed I mean that anxiety is a really core part of sort of
13:16
understanding how AI development is happening today and the thrash and the all of the headlines and the drama and
13:22
the board crisis Um because when you put yourself in the shoes of people who genuinely think that they are creating
13:29
God or the devil uh that is a that is an enormous burden to bear And I think
13:36
people really do kind of cave under that pressure Yeah I mean if I met anybody who said
13:45
that their job was creating God or the devil and trying to choose which was
13:50
which I would say you need psychiatric help Like I would be concerned for the person you know
13:56
because I in everyday human life I don't really think it's possible to do so I I
14:03
know that these folks have intellectually convinced themselves that this is the case Um but when you're
14:11
saying this it's part of what makes me go okay is is this entire industry not insane you know that that people believe
14:17
this or am I really meant to take their side and take their word for it that
14:22
this is what they are doing or is this like a mass delusion that's happening
14:28
within it or I mean if you talk to people that are in Scientology right they'll say oh no I I really uh we
14:34
really have to you know uh free the thetans or you know Xenu is going to come for us like they believe it right and they have a
14:40
whole system of thought and you can't really talk them out of it and they can be very convincing when they talk about it but you have to take a step back and
14:45
go "Hold on a second You're in this sort of mass delusional organization." Is that what OpenAI seems like or what you
14:53
know not just OpenAI I think Silicon Valley has gone on a progression in the
14:59
last 20 to 30 years where it originally started as a group of renegades that
15:06
were thinking about we can change the world but without actual evidence to
15:11
substantiate that just big bold ideas And then there was the era in which
15:16
Silicon Valley companies did change the world and for a while people thought it was good and then people realized that
15:22
it was not so good Um and now we're sort of entering another era where all of the
15:29
people within Silicon Valley have already seen the profound impact that their own creations can have So I think
15:37
that's sort of what's happening is you already have evidence that the actions
15:44
you take can have global impact and the stories they tell themselves about
15:51
the morality they have to uphold or the responsibility that they have to uphold in that kind of environment where there
15:58
is a lot of evidence pointing to how important their decisions are it creates that kind of quasi religious fervor
16:06
around the whole thing because in the past Silicon Valley has made massive
16:11
disruptions to our way of life new communications technology They've wiped out I don't know taxi cabs whatever You
16:16
know we can go we can go down the list of of things Um but a lot of times when
16:22
I'm looking at the promise of AI/AGI it seems like they are trying to postulate
16:30
well this is how we got all the money in the past was by creating all this disruption So we've got to like postulate the biggest disruption
16:36
possible and then we'll get the most money Um because that's sort of like our fundamental sales pitch to the US
16:43
government to Wall Street to humanity Um it doesn't mean it's true There's lots
16:49
of companies that you know I don't know Theranos or whatever right have said we're going to change everything and they were just saying it and it
16:55
wasn't true And uh OpenAI and
17:01
the AI industry more than other companies does sort of look like they're playing out this sort of thought experiment of okay we're building some
17:07
cool technology now but we're you know because of that then B will happen then
17:13
C will happen then D will happen then E will happen then we'll have created God and
17:19
It's I mean do you find it credible no We're seeing Silicon Valley evolve
17:26
into the most extreme version of itself But no we should not be buying their
17:34
word We have plenty of evidence from the past to know that we need to be
17:40
extremely skeptical of what they're selling us because ultimately as you said they create these
17:48
narratives because they realized in the past that these are the narratives that help them make money And we are now in
17:54
an era where it it's not just money There's also ideology quasi religious
18:00
ideology that is driving the whole thing But yeah we we still need to be deeply deeply skeptical It's the same people
18:06
that gave us social media and smartphones and now we've pretty conclusively determined that these are
18:13
not actually the most profoundly beneficial tools in our society or to
18:20
individuals or to kids Um and those are the same people that are creating AI So
18:26
we need to take a step back and recognize that Folks you know we've talked a lot on this show about
18:31
political polarization in America how we're stuck in media bubbles and how it's so hard to know whether the
18:36
information that you're getting is accurate and unbiased Well you know what I use to help me wade my way through the
18:41
thicket of American political media Ground News Ground News is this awesome news aggregator They gather up all the
18:48
news for you and give every single source a bias and a factuality rating So you know if the source you're reading is
18:53
from the center right the far left That doesn't mean that what's in it is false just means you should know the perspective that they write from The
19:00
same goes for the factuality rating where Ground News gives you an actual rating of how factual each source is so
19:06
you can avoid misinformation and know that you're getting the real deal We use Ground News on this show in our research
19:11
process and I think you are going to love it as well So if you want to break out of your bubble and make sure you're
19:17
getting the real story you can get 40% off a membership if you go to groundnews.com/factually Once again
19:22
that's 40% off if you go to groundnews.com/factually Folks let me share a secret with you I'm a very
19:28
private person and that's the only secret I'm going to share with you because again I'm a very private person
19:34
When I'm browsing the internet or working online I don't want anyone hanging over my shoulder breathing their hot swampy breath right into my ear as
19:41
they watch what I'm doing If you want to keep your ears free from that hot and sticky swamp breath you need to get
19:46
yourself a virtual private network And that is why I recommend NordVPN a VPN to
19:52
help mask your IP your location and stop digital swamp breath in its tracks If you've never used a VPN before it does
19:58
not get simpler than NordVPN Whether you use a Mac or a PC an iPhone or an Android you can connect to NordVPN with
20:04
one click or enable auto-connect for zero-click protection Once you're connected you'll find that you have
20:10
amazing speed and the ability to connect to over 7,400 servers in 118 countries
20:15
Traveling abroad while you can stay connected to your home country to make sure you don't lose access to region locked content on streaming services And
20:21
all of this with the joy of knowing that no one is leering over your shoulder So to get the best discount off your
20:27
NordVPN plan go to nordvpn.com/adamconover Our link will also give you four extra months on the 2-year plan There is
20:34
no risk with Nord's 30-day money back guarantee The link is in the podcast episode description box Check it out I
20:41
mean look there's good things about social media and smartphones but social media in particular Oh it's it's just
20:46
been a way to sell ads and centralize eyeballs It's not like it's it's had some giant purpose At the end of the day
20:52
it's just it's just business connecting the world Right Exactly And but it at
20:58
the end of the day it's just some dumb asses with a lot of money you know just trying to gobble up eyeballs and money just like business people always have
21:04
and doing it in a profoundly disruptive way So you know what would what would be different about this but let's just stay
21:12
on the on the quasi religious piece of it for a second more Like um these folks
21:18
genuinely believe that they are helping to usher in some like a new form of
21:24
intelligence a new a new god or devil Uh then why are they doing it if they're
21:30
afraid of it and how did they convince themselves that that's what they're doing I think there is a very critical
21:36
part of the narrative where if we don't do it somebody else will and that somebody else could be a very very very
21:42
bad actor So the only way to ensure that we
21:49
are going to get to some kind of positive outcome is by doing it ourselves That's such a hubristic thing
21:56
to think Everybody thinks they're a good actor Who's a worse actor than
22:02
Silicon Valley like I guess they go China sometimes and it's like yeah China's bad in some ways right
22:09
in many ways whatever Are you talking about Chinese corporations or about the government yeah I have plenty of criticisms of China but like I also have
22:16
plenty of criticisms of Silicon Valley why should I accept that they're the good guys why do they think they're the good guys because I mean this is
22:24
what Silicon Valley runs on right is self-belief Um and you know one of the interesting
22:32
things that I kind of reported on in my book is there are lots of enemies There
22:38
are the bad guy evolves Um China is definitely one that
22:44
recurs but within OpenAI the origin story of the company was they
22:50
were trying to be the antithesis to Google So Google at the
22:56
time was the evil corporation that's going to be developing AI with purely for-profit capitalistic motives and we
23:03
need to be the nonprofit that's going to be a bastion of transparency and do AI
23:09
development in service of the public good And Google has continued Google and
23:15
DeepMind have continued to be very much a competitor and upheld as a um we
23:22
do not want to be this and this is why we are continuing to pursue relentlessly
23:28
pursue this race to win because we need to get there before Google And there have been
23:33
others Like all of the AI companies now all have sort of different angles
23:40
where they imagine themselves as the best of the crop So
23:47
Anthropic was founded by a group of ex-OpenAI people It was a fissure in the original uh group of
23:55
OpenAI leadership where the Anthropic group then decided we think we can do
24:01
this better and we need to be the ones that create a different vision of what
24:08
AI is to um outmaneuver OpenAI We're the good guys they're the bad guys And
24:15
ultimately what's interesting is that even as all of these companies have their own self-defined narratives of
24:22
their self-worth and value being higher than others they're all pursuing the same thing which is large language
24:30
models scale scale scale and growth at all costs I know that OpenAI initially
24:36
started as a nonprofit of some kind Uh you've written that OpenAI has become
24:41
everything that it said it would not be What do you mean by that so OpenAI's original founding story was Sam Altman
24:50
had this idea for an AI research lab He wanted to recruit Elon Musk to join forces with him And Elon Musk at the
24:57
time had a particular uh thing against Google and DeepMind
25:03
Demis Hassabis And so Sam kind of pitched him this idea Why don't we create the anti-
25:10
Google the anti-DeepMind and we'll counter the way that Hassabis is
25:15
conducting himself with a completely different approach And I touched on this earlier like they then commit to being
25:22
totally transparent open sourcing their research not having any commercial objectives um and and serving this
25:28
higher purpose um what what I ultimately in the book called a civilizing mission
25:34
because I really think we need to start understanding these companies as empires
25:40
um of we are working to ensure that artificial general intelligence
25:46
uh will benefit all of humanity and essentially if you look at
25:52
what OpenAI is today I mean it's done a complete 180 it's a
25:58
for-profit corporation I mean it is still a nonprofit with a for-profit nested inside but it is the most capitalistic
26:06
organization that you could point to in Silicon Valley today It it just raised $40 billion which is the largest
26:14
fundraising round of private investment ever in the history of Silicon Valley and put the company at a $300 billion
26:21
valuation which makes it one of the most valuable startups ever Um it's not
26:26
something nonprofits normally do Yeah Right and it doesn't release research anymore And
26:33
in fact a lot of what it did through the course of its history was essentially
26:40
re-establish new norms within the entire industry the entire AI field to stop releasing meaningful technical details
26:46
about AI systems at all So not only are they not transparent themselves they have turned the rest of the field
26:54
and the industry towards totally opaque norms Um and that you know they are
27:02
pursuing commercial objectives They are relentlessly releasing new products trying to growth hack to get more and
27:10
more users as an OpenAI source very recently told me Um and they are
27:16
basically the most Silicon Valley of Silicon Valley companies now even though
27:22
they originally portrayed themselves as the opposite Wait so you say they're growth hacking to increase their users I
27:28
remember when ChatGPT came out it was supposedly one of the biggest product launches in tech industry history in
27:33
terms of how many people used it And of course the story has to be of incredible
27:38
growth of uh you know uptake of AI usage um if they want to keep
27:45
getting the investment why would they have to growth hack in order to show
27:50
growth if the product is so transformative it's a it's a great question You know Facebook also did the
27:57
same thing They also said that their product was incredibly transformative but I mean they practically invented growth hacking as a
28:04
company um by creating a growth team and turning it into the core of of the
28:10
company And that became a model in all Silicon Valley companies where all startups now have growth teams And that
28:17
is a really important part of showing investors hockey stick numbers They want to keep showing this rapid rise in the
28:24
number of users that are signing on to the platform Altman gave testimony in the Senate um just a
28:31
couple weeks ago and I think he said that there were 300 million active users on uh OpenAI's ChatGPT today
28:40
that is still compared to other internet giants low Yeah Um in absolute numbers
28:49
and also in Altman's mind And so um you know the Miyazaki stunt that they pulled
28:57
sure that afterwards OpenAI was super pleased that they were able to get a
29:02
million new users from the or it might have been more than that but they were able to get a ton of new users from that
29:09
particular feature that they added And the feature that let you create a selfie
29:15
that looked like a Studio Ghibli movie Exactly Yeah This was their big accomplishment I mean I saw those
29:21
selfies but I'm like who gives a [ __ ] like that's not a transformative product right It's just like there have
29:29
been little fads like that for the past couple of years even before ChatGPT Oh I did the watercolor AI of my face Like
29:36
it Well you know what's so interesting I was in Europe and
29:42
whenever I'm traveling I always will randomly ask people oh have you heard of OpenAI have you heard of ChatGPT have
29:48
you heard of AI like what are your thoughts on it and I spoke to a woman who was like I hadn't heard of it until
29:55
recently where I realized I could make a cartoon of myself and that's super cool
30:01
And you know it's it's a really effective tactic for getting more users
30:06
and getting them engaged um and reaching people that they haven't reached yet Okay fair It's like a big wide funnel
30:12
and then maybe those people say "Okay now help me cheat on my math test." or
30:18
whatever after they do the Miyazaki art Um but it also highlights a major criticism of this technology that
30:25
a lot of people have is especially artists feel that it's institutionalized theft and the fact that their
30:32
biggest sort of news moment of the past couple months was you know lifting the
30:38
style of one of the most famous artists in the world Yeah in an unauthorized fashion I assume Hayao Miyazaki is not
30:45
receiving a couple pennies every time someone makes themselves look like Po despite the fact that it is trained on
30:52
his work Um I thought it was an odd stunt for that reason because it highlights one of the main moral
30:58
objections that people have to this technology which is which is that it's built on the back of all of humanity in
31:05
a way that we are not being compensated for Absolutely It is weird and I do think it kind of signals a
31:11
phase shift in how OpenAI is now engaging Um there was a period in which
31:16
I think they were more cautious about trying to portray themselves as
31:21
listening attentive um democratic in the the way that they
31:29
were receiving feedback and adjusting themselves And I think they have now moved to a different era where they are
31:37
just running and racing and they're not as um concerned anymore
31:43
about the negative ripple effects it can cause if it also allows them to do what they need which is they
31:50
need to monetize They are losing massive amounts of money They are raising
31:55
massive amounts of capital They need to figure out how those trains are not gonna crash Yeah Um and so
32:04
I think that is yeah the Miyazaki thing definitely exemplifies this
32:12
pressure and the priorities that they have now as an organization How do they plan to
32:18
monetize well it's interesting that they recently hired a new um CEO of
32:24
applications Fidji Simo and she has a career where she has a
32:33
lot of experience with advertising Um Altman has indicated publicly that they need to
32:40
figure out a plan for monetizing the free tier of users So I think they're going to go the way of all Silicon
32:47
Valley companies um when they start looking for some kind of cash cow is advertising
32:54
um advertising off of the data that they're collecting And um you know I I I
32:59
was speaking to another OpenAI source at one point who mentioned that
33:04
one of the best business models that still has not been superseded within the
33:10
valley is search And what he meant was advertising like being able to get users
33:18
information in exchange for getting their information to then package out to
33:24
the people with the money And so that is absolutely one thing that they're exploring They're also exploring
33:30
subscriptions but you know the price tags that they're putting on these subscriptions now hundreds of dollars a month they're considering thousands of
33:36
dollars a month is not going to be appealing to the average user So they have to balance it with also the
33:43
majority of users how they're going to monetize them for free But uh a business model where they're
33:50
imagining people are going to go to ChatGPT to ask questions and ChatGPT is going to give answers and then also
33:57
serve ads that are based on the previous questions That is Google right it's Google with a different uh style of
34:04
delivering the answer and with a different sort of database because it's based on a large language model rather than like searching the internet but um
34:12
that's so okay they might supplant Google Google is a great big company one of the largest in the country but it's not
34:19
transforming the entire global economy and replacing humanity it's just like
34:25
okay so their aspiration is to be Google that doesn't sound as big as what they are describing to me yeah absolutely I
34:31
think there has always been a divergence between what they say and what they're doing Yeah And it has reached a new
34:38
level now that money is a much more pressing issue that they need to
34:45
address urgently What do you mean when you say we want to understand these companies as empires
34:51
so what I write about in the book is when you think about the very long history of European colonialism and the
34:58
way that empires of old operated there were
35:03
several different features of empires of old One was they laid claim to resources that were not their
35:09
own and they designed rules that made it seem like they were their resources you know the Spanish conquistadors showed up
35:16
in the Americas and were like "Actually based on our laws we own this land and
35:22
these minerals and these resources." Um they would exploit a lot of labor around
35:27
the world meaning they didn't pay workers or they paid them very little to continue building up and
35:33
fortifying the empire They competed with one another So there was this aggressive
35:38
race of we the French Empire are better than the British Empire We the British
35:44
Empire are better than the Dutch Empire And we need to continue to relentlessly
35:49
race and be number one because we're the ones that have the right civilizing mission to bring modernity and progress
35:56
to all of humanity Ah that is we are morally superior
36:02
Exactly Mhm And that is literally what is
36:07
happening now with AI companies where they extract a lot of resources They lay claim to a lot of resources that are not
36:13
their own but they're trying to position it such that it seems like it's their
36:19
own For example they're trying to make it sound like copyright laws
36:24
allow them to have fair use of artists' and writers' and creators' work to train
36:31
their models on But ultimately those models are creating very very
36:36
effective substitutes insofar as it's taking economic opportunity away from those same artists
36:42
writers and creators Now they are exploiting a lot of labor both in terms of the labor that they're contracting to
36:50
do all of the labeling and cleaning of the data before it goes into the models and also in the fact that they are
36:56
ultimately building labor replacing technologies OpenAI's definition of AGI
37:02
is uh highly autonomous systems that outperform humans
37:08
at most economically valuable work So they're building systems that will ultimately make it much harder for
37:14
workers to bargain for better rights when they're at the bargaining table And they're doing this in a race where they
37:22
they position themselves as morally superior to the other bad actors that
37:27
they need to beat and they have this civilizing mission If you join us
37:33
and allow us to do this if you give us all of the resources all of the capital
37:38
and just close your eyes to the enormous environmental social and labor impacts
37:43
all around the world we will eventually bring modernity and progress to all of humanity And one of the things that I
37:50
mentioned in the book is there is um you know empires of old were deeply deeply
37:57
violent and we don't see this kind of overt violence with empires of AI today
38:02
but we also have to remember that modern day empires are going to look different because we've had 150 years of human
38:08
rights progress and social norms have shifted and so what we need to recognize is that the
38:14
template evolved into present day and all of the features of empire building
38:20
are there Um and one of the analogies that I've started
38:26
increasingly um using that I didn't originally put in the book but if you think about the uh British East India
38:33
Company it originally started as a company that was doing mutually beneficial economic agreements with
38:39
India And at some point an inflection point happened where the company realized that they could start acting
38:46
completely in their self-interest with no consequences And that is when it dramatically evolved into an imperial
38:53
power and then eventually became a state asset and the British Empire
38:59
the crown then turned India into an official colony And we are seeing that
39:05
play out in real time where OpenAI and all of these empires of AI they are gaining so much economic and political
39:11
leverage in the US and around the world And they are so aligned and backed by
39:16
the Trump administration now that I think they have reached a point where they basically can
39:22
act in their self-interest with no material consequence to themselves anymore And if we allow
39:33
this to continue I think it can be profoundly devastating I mean what an incredible comparison uh between OpenAI
39:42
and the East India Company and one of the things that strikes me is how it leverages
39:47
uh the public hatred for Silicon Valley companies You know 15 years ago we all loved these companies They were like
39:53
bright shining uh beacons in the American economy They were so warm and fuzzy And then gradually we start to go
39:59
"Ah Google's kind of [ __ ] me Apple I'm kind of pissed off at them." And oh they're the new
40:04
Wall Street right the the the public discontent is growing Um and so these
40:10
companies have sort of adopted some of that language and sentiment to say "Yeah yeah they're all corrupt except for us
40:15
We're the good one We're the one who's going to save you from the bad ones." And they're all doing that Like Anthropic says about OpenAI etc etc
40:23
Yeah Um but it's a tactic to gain power for themselves Yeah Exactly And
40:29
the public discontent that has been rising over the last decade really is
40:35
based on the fact that people feel like they're losing control and agency over their lives Mhm And there's a reason for
40:41
that is because these companies are gaining more control and agency over your life Yeah they are taking your data
40:48
and most people feel like there's nothing they can do about it You know they just enter this um
40:55
nihilism where they're like "Well we don't have any privacy anyway so whatever." But they're left with this
41:02
feeling of a lack of control a lack of self-determination And that is
41:09
ultimately what I really hope readers can take away from the book is
41:15
this is a continuation an evolution and the most extreme version of what we've
41:21
ever seen before in the way that Silicon Valley has eroded away our individual
41:27
and institutional um foundations for self-determination
41:32
Yeah when you talk about these companies as empires that are extracting resources
41:38
you know I was just in uh Amsterdam on tour and uh I went to a couple museums
41:45
and it was just very apparent to me Amsterdam as this like physical manifestation of Dutch empire right that
41:51
like I went to the Rijksmuseum and they just had one or two paintings about like Dutch colonies
41:56
They're like "Oh here are the indigenous people of Java like planting sugarcane."
42:02
I went there in October last year Yeah Oh amazing And there's like literally like one or two paintings right in the
42:07
whole museum The rest of it is like here's a beautiful you know Dutch painting This is worth $10 million Here's
42:13
the super sophisticated mapping technology that we developed and the compass and navigation
42:20
technology we developed and Yeah But then there's this little acknowledgement that because they know but they can't
42:26
really acknowledge fully ah this was all extractive right and so I'm in the city going man they extracted wealth and
42:32
labor and blood from countries from other civilizations around the world They turned it into this physically
42:38
gorgeous city What a wonder Everyone goes to Amsterdam and says what a beautiful place But it was taken
42:44
from other places right and accreted there and now it's been there for you know a couple hundred years at
42:50
this point Um so we're familiar with that kind of extraction With this type of empire
42:56
who are they extracting from and what are they extracting they're extracting from everyone Um they're extracting data
43:04
from everyone but also they're extracting actual physical minerals from the earth as well Because in order to
43:10
train these colossal AI models which is not an inevitable manifestation of AI it
43:16
is very much a choice that Silicon Valley made to build models that manifest the growth at all costs
43:22
mentality that they have They need an enormous amount of computational
43:28
infrastructure which means data centers and supercomputers and that is built on
43:33
minerals that come from somewhere And so for part of the book I ended up going to Chile to the Atacama Desert where um it
43:42
has long dealt with all kinds of extraction but that extraction has really accelerated because of two things
43:49
because of the um electric car revolution The Atacama Desert has a lot of lithium and because of AI um they
43:58
have a lot of copper and lithium is also needed in data centers and there are indigenous peoples there that are
44:04
literally being displaced and literally experiencing colonialism right now It is not a thing of the past for
44:10
them They are having their lands taken They are having their economic opportunity taken They're having um
44:15
their spiritual grounds taken the place where they engage in their
44:21
connection with the earth And um they said to me when I was interviewing the
44:27
indigenous communities there we have always always always been told these
44:33
ideas about this will bring everyone into the future This extraction this
44:40
hollowing out of our lands is going to bring everyone into the future And they're like "Are you sure it's
44:46
everyone?" Like who is this bringing into the future because this is hurtling us backwards in time
44:54
where we have less rights less resources less economic opportunity than ever before And tell me about the piece where OpenAI is becoming allied with the US government cuz that's
47:28
another really strong comparison to these colonial empire companies of the past Uh when did that start happening
47:36
was it specifically with the Trump administration and how has Sam Altman made that happen I think the most
47:41
symbolic moment happened on day two of the Trump administration when President Trump stood in front of an audience at a
47:49
podium next to Sam Altman and announced the $500 billion Stargate initiative So this is an initiative that's going to
47:55
invest uh it's private investment 500 billion into building compute
48:00
infrastructure and OpenAI has said that this is for it
48:07
alone and that was a very very strategic
48:12
and clever move by Altman because at the time what was happening was OpenAI was in a bit of a fragile position where it was
48:20
being sued left and right by lots of different groups and most importantly by Elon Musk an original co-founder that
48:28
then got snubbed and uh has given a lot of grief to OpenAI in the recent year Um
48:36
and Elon Musk had also bet on the right horse and had gotten himself elevated
48:41
into an extremely prominent position in the administration The head of the department of whatever
48:49
DOGE what was he DOGE what is that Elon Musk is in the government I just I
48:54
guess I was asleep Somehow we all blacked out I think I've made a dozen videos about that so far this year Uh
49:01
sorry Go on please Yes Um and so OpenAI was in this position of oh the man
49:07
that wants us to not do what we do is now extremely
49:14
powerful And so what Altman did was he started
49:20
negotiating behind uh closed doors to get himself into basically the same
49:26
position The one person at the time that could protect him from Musk was Trump Mhm
49:32
so if he allies himself with the president by striking up this thing of you take
49:40
credit for your administration bringing in $500 billion of investment for
49:46
computational infrastructure that is going to keep America first in the AI
49:52
race You take credit for that and then in exchange Altman got a shield Um and
49:58
so I think that is one of the most symbolic moments in how OpenAI has
50:05
allied itself with you could argue maybe the only power
50:11
that was higher than Silicon Valley um in that moment the US government because
50:18
Silicon Valley has more power than basically every other government in the world now Um and the Trump
50:26
administration has been all in since then in declaring we don't want anyone
50:33
to talk about regulation You know literally just this past week um
50:38
Republicans tried to slide in a specific line within a tax bill that they're
50:43
trying to pass that proposes to block all state regulation on AI for 10 years Yeah
50:51
So what the Trump administration is doing is pulling out all the stops to try and
50:57
make it as frictionless as possible for these AI companies to relentlessly drive
51:03
forward And in fact they're uh they're really putting AI into the government
51:08
itself A big part of Elon's DOGE initiative but also you see it echoed in all different parts of the Republican
51:15
administrations of the country and the various states is we're going to fire all the
51:21
government workers We're going to replace them all with AI Um it's interesting to see the government be the
51:28
first place that is really affirmatively trying to do this whether or not it
51:33
works What do you make of that what a great way to turn what was public
51:40
infrastructure into private infrastructure What were public workers that earned public money and
51:47
operationalized what elected officials uh determine needs to be done and turn
51:55
it into just automated AI systems that are taking all of the public data
52:01
government data private citizens data and funneling it through company servers
52:06
right to do supposedly the same thing
52:11
but not really because these systems break down a lot right well and their output is unpredictable and they have
52:19
weird hallucinations and everything else and you know maybe you fire a bunch of
52:25
IRS workers and run everyone's tax returns through AI and suddenly it starts putting white genocide onto
52:31
everyone's tax forms You know that's its own story and that might be more of an Elon story than an AI story No but I
52:39
think it is a very effective you know I think that moment was a great way to
52:45
highlight the fact that we don't have any checks on these companies and how
52:51
they are going to design their AI models and what kinds of values they use this vehicle to ferry out into the world
52:58
right and usually it's not so overt In this case it was and it
53:03
really showed what's actually underfoot But that's the problem It usually is
53:09
much more subtle But OpenAI has said you know when President Trump came into power they said we are
53:16
going to start relaxing like we don't want to be so heavy-handed in content moderation You know that's
53:23
a political choice They are trying to in more ways than one align themselves with
53:28
Trump by making sure that their technologies are not going to spark the
53:35
ire of the president and um are shifting with the political winds of who's in
53:40
power And it's not just that they've aligned themselves with Trump so
53:47
much of American society and business and uh punditry has aligned itself
53:54
with AI has swallowed it I'm thinking about uh you know I interviewed a couple weeks back Ezra Klein and Derek
54:00
Thompson on the show about their book Abundance They make a lot of arguments
54:05
in the book some of which I agree with There is a page or two in the book where they're talking about the
54:11
importance of government investment in science generally Certainly agree with that There's so many
54:17
amazing innovations we never would have had if the government hadn't invested in the basic research And then in the course of that argument they say well AI
54:23
is the next big thing and the government could invest billions and billions of dollars into AI data centers and make
54:29
sure that America has a lead in AI because that's you know where everything is going And I got to that part and I
54:36
was like this is just you know have you been listening to a lot of Sam Altman podcast interviews You know what I mean that
54:43
to me sounds like a handout to these Imperial companies as you say Um do you
54:50
view it that way and why have they been so successful even as they're aligning themselves with Trump who has very
54:56
little in common with Klein and Thompson in terms of his objectives uh you know uh
55:01
liberals have also started espousing uh the same argument Have they fallen for a bill of goods So I agree
55:08
with Ezra Klein and Derek Thompson on the first part that AI will be the next big thing
55:15
Where I disagree is what kind of AI are we talking about and the kind of AI that I'm talking about doesn't actually
55:21
need massive amounts of data centers and computing infrastructure AI has been around for a long time There are many different types
55:26
of technologies that are actually named AI and the things that I think can be
55:33
transformative are smaller task-specific deep learning models or maybe other
55:40
non-deep-learning AI systems that attack specific problems that we need
55:47
that also lend themselves to the strengths of AI So an example um is AlphaFold like DeepMind
55:55
created AlphaFold to solve a little bit in quotations the protein folding
56:00
problem That has nothing to do with large language models It has nothing to do with growth at all costs
56:06
mentality It was a very specific problem Let's try and do this extremely
56:11
computationally intensive task that we previously didn't have the uh computational um uh software
56:19
for and unlock lots of different
56:24
types of potential new resources um for scientists to do drug
56:29
discovery um and other kinds of really interesting work Um I I'm also talking
56:36
about AI like AI that can help integrate um more renewables into the grid This is
56:41
something that we really desperately need to do We need to continue transitioning our economy to a clean
56:48
energy economy And one of the um challenges of doing that is renewable
56:55
energy is a very difficult to predict source Sometimes the sun shines sometimes the wind blows and sometimes
57:00
they don't And in order to more effectively have more of that capacity
57:05
in the grid there need to be better predictive AI systems that are uh
57:11
figuring out what the generation capacity will be in the short-term future and then optimizing who gets what
57:17
energy Um and that is an optimization problem and
57:23
AI systems are incredibly effective at solving optimization problems And so there's all of these
57:29
interesting problems in society that AI does naturally lend itself to But I
57:35
think the way that we can get broad-based benefit from AI technologies is by
57:41
unwinding this scale and growth at all cost mentality back towards let's figure
57:47
out what are the specific problems that we need that are sort of the
57:52
lynchpin issue that we need to crack that also AI is good at cracking and
57:58
then develop well scoped AI systems to tackle that very specific
58:04
problem and that can be I think hugely transformative but that is absolutely
58:09
not what we're doing right now Yeah I mean you described a few problems there Protein folding was an existing problem in biology uh that I remember reading
58:16
about at least over a decade ago Uh there are various like distributed computing projects you could join and
58:22
like devote some of your CPU cycles to like folding proteins and like help out science right and so sure if an
58:31
algorithm that we might call AI is good at solving that that's great That's a great advancement Um why then are these
58:40
companies perhaps you've already answered this question but I'd love to hear you just talk about it again Why
58:45
did these companies not take that strategy right why is it massive growth
58:51
at all costs we need more compute We're going for AGI It's sort of this giant
58:57
blob approach It's going to transform everything It's going to do everything
59:02
Therefore we need everything And nothing must stand in our way I always say that
59:07
it's a result of three things Money power and ideology
59:13
If you take this approach you get to accumulate enormous amounts of money and enormous amounts of power and enormous
59:20
amounts of political and economic leverage And there is this deeper driving force
59:26
as we talked about this quasi religious force behind the whole thing where there
59:31
are people who genuinely believe that they are building God or the devil and
59:38
that constellation of things
59:43
leads to basically really poor decision making
59:48
Yeah where it really is all consuming This kind of effort to advance advance
59:56
advance and grow and grow and grow and consume and consume and consume without recognition of what's happening
1:00:05
in the present with all of the externalities that that causes Yeah it seems to be optimized for growth
1:00:13
rather than any kind of understanding of human society or humanity I'd actually
1:00:18
love your take on this I don't know if you saw over the last couple weeks I was in a little internet firestorm
1:00:24
because I did a promoted video for one of Sam Altman's other companies called World Um I eventually
1:00:30
canceled the gig and turned down the cash and I have a video about it coming out It'll probably be out by the
1:00:35
time this interview airs right now I'm working on it as we're speaking But you know this is Sam Altman's company where
1:00:40
uh there's an orb that you gaze into and it proves that you're a human supposedly
1:00:46
and then uh you can use that to log into stuff It's also a crypto wallet and it's also like an everything app right where
1:00:52
you can chat and you can do like everything else you might want to do on the internet with the app Yeah I went to their keynote and I felt I don't even
1:00:59
know how to explain this to a person right I don't know what the pitch is to a user I don't know why someone would
1:01:04
sign up for this The entire thing seems to be
1:01:09
created on this level where it's just meant to get Marc Andreessen to
1:01:14
give them a couple more billion dollars every year right um like the pitch is to investors or to some
1:01:20
sort of hazy notion of the future rather than to the public itself Like it's been made by people who have
1:01:27
not like talked to another human being in a couple of years Um I'm curious if you share that view Like are
1:01:34
these people completely detached from human society yes And also to your
1:01:40
question of who would sign up for this I was just in Indonesia and Indonesia gave
1:01:46
Sam Altman their very first golden visa uh which is an investment visa that they give but they also give it based on other
1:01:51
criteria So it's not clear if Altman actually invested And I was talking with a bunch of civil society folks and
1:01:57
journalists in Indonesia about this and their number one concern was World and
1:02:03
they said this company is coming in and it doesn't matter what the premise is
1:02:10
People are lining up out the door because all they have to do is give up their biometric data for $50 $50 US And
1:02:18
in Indonesia that is a huge deal And that happened in Kenya that happened in many many other countries where for that US
1:02:27
dollar cash they don't need to know what it's for and I think
1:02:33
this is what's so dangerous and and also what I try to highlight in the book is like there there's so many conversations
1:02:40
that sometimes we have in the US where we just think about these technologies in the context of the US which is
1:02:46
ultimately one of the wealthiest countries in the world and even the poorest people in our country I
1:02:55
mean they're not well off but compared to the poorest people in the poorest countries like there is still a certain
1:03:01
level of a floor there and to really understand how these technologies how
1:03:08
these visions that Altman or anyone else has developed you cannot just
1:03:14
understand it within the US context and certainly not within just a Silicon Valley context you have to go to these
1:03:19
most vulnerable populations in the world What happens with
1:03:25
World is all of these extremely poor people were willing to just give
1:03:33
away their rights for a tiny morsel of cash Yeah And we see that with the
1:03:38
impact that AI is having all around the world as well with the labor exploitation piece I mean these
1:03:44
companies when they contract workers to work on these technologies to clean the data and do content moderation
1:03:52
um in the same vein as content moderation in the social media era the workers are willing to do psychologically
1:03:59
traumatizing work for pennies because that is the thing that
1:04:06
will allow them to for just a day put food on the table for their kids And so
1:04:12
that is like when we talk about it I think OpenAI's mission as much as I
1:04:20
criticize it is a noble one that could be taken seriously The idea that you
1:04:25
could develop technology for the benefit of all humanity should be taken seriously We should be doing that That
1:04:31
is what I would define as genuine progress in society if we can lift all boats and not just continue to only lift
1:04:38
the ceiling while the floor continues to bottom out Um and the only way to truly understand how we might be able to get
1:04:45
there is to go to these places where the floor is bottoming out right now and to understand why and correct for that Yeah
1:04:53
And your point is well taken that that is where those companies are going that is where uh they are thinking globally
1:05:02
and we very rarely do in the United States We rarely think about the existence of those countries um and the
1:05:09
people who live in them and what their lives are like um and you know the fact that they're the vast majority of
1:05:14
lives on earth Um but people like Sam Altman are thinking about those
1:05:20
places and how they can extract from them and how they can exploit them in order to create an empire for themselves And that's
1:05:28
what makes it a colonial empire You're really painting that picture vividly Yeah absolutely I don't think
1:05:34
you can really start to understand the full scope of the empire and the
1:05:39
colonial nature of it until you travel to places that are the farthest flung from Silicon Valley Well I think the
1:05:47
problem facing us then is look I think critics of AI have a problem which is
1:05:54
that this industry is so massive It has accreted so much power
1:06:01
unto itself It is so driving the conversation every moment of the day
1:06:06
that sometimes when you write about it or talk about it like I do you feel like
1:06:12
you're still just a passenger on the train You feel like you're still almost contributing to it because you are
1:06:18
having the conversation that they are determining Um you said earlier if we don't stop it if
1:06:25
we don't you know if we don't think about what they're doing if we let them do this right yeah
1:06:31
Um and that stuck with me because I'm like they have so much power How can we stop them
1:06:38
when it feels like even the very terms of our conversation about what they're doing are dependent on their
1:06:45
actions Um so how do we think about that and how do we begin to make progress in the face of that well first
1:06:51
of all I think you're articulating something that is also central to empire building which is that empires make you feel like
1:06:57
they're inevitable right but throughout history every empire has fallen And it
1:07:05
comes down to the fact that every empire as much as they feel inevitable also do
1:07:11
have weak foundations in the sense that they need to consume so much in order to
1:07:17
continue that when there starts to be resistance on all of the
1:07:23
things they need to feed on to fortify the empire and perpetuate the empire it starts to crumble And so the way that I
1:07:31
think about it is there's a supply chain for AI development These companies need
1:07:36
a lot of data They need a lot of computational resources And if you are to chip away at
1:07:44
each of these they will eventually be forced to go a different
1:07:49
direction and not continue this all-consuming um path of AI development And so with
1:07:57
with data you know we're already seeing lots of movements of artists and writers starting to sue these companies
1:08:04
saying we need to figure out a much better way to either get compensation
1:08:10
and credit or to not have this in your training data sets at all We've also seen the way that artists have used um
1:08:16
tools like Glaze and um Nightshade which you can use
1:08:22
to add a bit of a filter that the human eye can't see on your artwork when you put it online in a portfolio But
1:08:29
when the AI model tries to train on it it starts to break down the model So there's all of these forms of protest
1:08:35
that are bubbling up And with labor we're seeing Kenyan workers rising up
1:08:41
and protesting their working conditions and creating an international conversation around labor norms and
1:08:48
trying to um actually guarantee them better wages better working conditions
1:08:53
We're seeing Hollywood writers rise up and demand certain stipulations on how AI can be used and whether
1:09:01
or not their work can be trained on Um and we're seeing lots and lots of communities also rise up to demand more
1:09:10
transparency around data centers that enter their communities and um have ground rules around what kind of
1:09:17
resources they can take whether it's energy or water or whether the data center should be there at all And so if
1:09:23
we can all just remember that we actually do have agency in this
1:09:28
situation like if you are um you know a parent of a kid you can go to your
1:09:34
kid's school and ask them what is their AI policy and can you actually create a
1:09:40
coalition of parents to talk about what the AI policy should be and contest um
1:09:47
whether or not AI tools should be in the classroom or what are the guard rails around when they should be deployed You
1:09:52
can go to your doctor's office and ask them the same questions about whether or not you want AI to be used in your
1:09:59
healthcare journey Um and if we just remember that we have agency in all of these things and we continue to assert
1:10:05
what we want out of this technology and what the ground rules are for how it
1:10:11
impacts us and our lives I think we will get to a much much better future I love
1:10:18
that vision I think though you also highlight how big of a battle it is Uh absolutely because you have
1:10:26
convinced me that it is empire that we are up against and you know the
1:10:31
battles against the empires of the past took a couple hundred years right empires take a while to fall and these
1:10:36
are just getting going and uh you know we don't live in Star Wars right where
1:10:42
where it opens with the rebel alliance winning right um We
1:10:49
live in a world where you know it could be a more grinding battle than that but we have no choice but to fight
1:10:54
And I think I love your emphasis on our agency that so often we have this
1:11:02
tendency to roll over for these people and just accept the premises of what they say and what we have to do and oh
1:11:08
well it's coming so might as well get with it um and start using this Might as well build the data center
1:11:14
because we got no choice Um and just the process of questioning these people uh
1:11:19
is really so important And by the way I think it's brave for you to do so when you're a reporter who speaks to so
1:11:26
many of them for you to take this tack because so many uh reporters in your position end up you know acceding to
1:11:32
their framework right because they want the access and they want to be able to continue writing
1:11:38
about it and they sort of go native as it were Um and so the fact that you've remained a critical voice
1:11:44
while doing the incredible high-level reporting you do is really wonderful and I thank you for doing it Thank you Thank you I mean I've had a
1:11:50
lot of mentors along the way that have reminded me that ultimately your purpose is to serve the public Yeah And to speak
1:11:56
truth to power And so that is what I've tried to do consistently through my career And to your point about empires
1:12:03
taking hundreds of years to fall I mean they also originally took hundreds of years to create But we are in a
1:12:09
different time when I think the rise and fall of empires is going to accelerate
1:12:14
And also in the past there was no democracy There was no
1:12:21
taste of what was the alternative to empire We are now at a point in our
1:12:28
progression as a human race where we understand that there are other forms of governance and that we do not need to
1:12:33
capitulate to people who paint themselves as superior Well I can't thank you enough for coming on to spread
1:12:40
the message with us and just uh tell us about your incredible reporting The name of the book is Empire of AI Folks can
1:12:46
pick up a copy at our special bookshop factually.com/books Where else can they find it and where can they find your uh
1:12:52
writing and work on the internet I uh am a freelancer now So the best way to find me is on my LinkedIn or my
1:12:59
social media Bluesky Twitter um and through my website karenhao.com Karen
1:13:05
thank you so much for coming on the show and I can't wait to have you back Thank you so much Adam My god Thank you once
1:13:10
again to Karen for coming on the show She's such an incredible guest If you want to pick up a copy of her book once again that URL
1:13:17
factually.com/books Every book you buy there supports not just this show but your local bookstore as well If you'd
1:13:22
like to support the show directly patreon.com/adamconover Five bucks a month gets you every interview ad free
1:13:28
for 15 bucks a month I will put your name in the credits of the show and read it right now Uh this week I want to
1:13:34
thank Aaron Harmony Joseph Mode Rodney Patnham Greg0692 Marcela Johnson Matthew
1:13:39
Burlesen aka the Bunkmeister Kelly Noak Anthony Janet Barlay David Sears VG
1:13:45
Tanky Damen Frank Matthew Robert Miller Griffin Myers and oh no not again If you'd like me to read your name or silly
1:13:50
username on the show once again patreon.com/adamconover Of course you can find all my tour dates at adamconover.net
1:13:56
I want to thank my producers Sam Rodman and Tony Wilson Everybody here at Headgum for making the show possible Thank you so so much for listening and
1:14:02
I'll see you next time on Factually That was a Headgum podcast