ChatGPT / OpenAI / xAI
ーーー
Karen Hao

+ News search [Karen Hao]





0:00
Altman is a once-in-a-generation storytelling talent.
0:06
People who work with him either say that he is the greatest tech leader or they say that he's a liar, manipulator,
0:13
abuser. They were highly secretive as an organization riddled with clashes
0:19
between egos and ideologies. Japan is very vulnerable to both the resource exploitation and the labor
0:25
exploitation. Empires of AI exploit labor in the process of producing their technology.
0:32
[Music]
0:53
OpenAI. [Music]
1:06
Empire
1:22
of AI.
1:44
Okay, so joining us here, Karen Hao, the author of Empire of AI. It's great to
1:49
have you here. Thank you so much for having me. Yeah. So, you're joining from Hong Kong
1:55
today. That's right. So, are you based in Hong Kong right now? Not the US anymore?
2:02
Yeah, that's right. I'm currently based in Hong Kong. Okay. So, congrats on the book. And
2:10
before we get into the details, um it's been a month since you published. So,
2:16
what were the reactions to your book so far? Yeah, I've been quite grateful
2:23
that it's been a largely positive reception and I think people are really
2:28
excited about both the reporting that's in the book, telling the inside story of the company OpenAI, and the
2:35
argument that I make that we need to understand companies like OpenAI as new forms of empire. There are some people
2:43
who don't take to the argument but still really appreciate the reporting. I've been really grateful
2:50
that it's the reception that I had hoped for. Okay. Oh yeah, that's great to hear.
2:57
So are you also talking to Japanese publishers to bring it in here? Yeah, the Japanese
3:04
edition was actually the first foreign edition that we signed, so it'll be published early next year.
3:09
Oh, early next year. Okay, I look forward to seeing it. So, today I would like to divide the
3:16
interview into three segments. The first is the core of Empire of AI and
3:23
OpenAI's transformation. I'm going to ask how you got into AI reporting and
3:29
covering OpenAI, and how you have seen the company changing over time. That's the
3:35
first. The second one is the hidden cost of the AI empire, where I'll ask you to
3:41
walk us through what you wrote about the costs behind the large AI models. The
3:47
third one is a call for AI governance. That's also what you wrote about: the
3:53
importance of AI governance as a wider concentration of power in
3:59
AI is happening. So we'll dive deeper on that.
4:04
That sounds great. Yeah. So, the first part: the core of Empire of AI. First, I want to
4:11
ask you: you earned a degree in mechanical engineering at MIT, but you got into journalism and
4:19
especially AI reporting. And you're one of the reporters who has been
4:24
covering OpenAI from the very early days. So could you quickly walk us
4:30
through how all that happened? Yeah. So I studied mechanical
4:37
engineering because I was very interested in understanding how to use
4:42
technology as a tool for social change. At the time I was quite interested in environmental issues and I thought if we
4:49
could develop technology products that facilitate shifts in consumer behavior, that would be one
4:56
of the ways to mitigate the climate crisis. But quickly after graduating, I
5:02
went to work in Silicon Valley, worked at a startup with a sustainability mission. And I realized that the Silicon
5:09
Valley model of innovation is very much driven by building technologies that generate profit, not necessarily
5:15
building technologies in the public interest. And in fact, increasingly, it felt like it was ushering in technologies that were antithetical to
5:22
the public interest. And so it made me confront whether or not going into this
5:29
industry was actually the best career path for what I wanted to do. And I
5:34
quickly concluded that it was not and I should try doing something else. And that's how I ultimately switched into
5:39
journalism originally to cover environmental issues because I was still focused on the idea of trying to tackle
5:49
this climate crisis. But because of my background, I was initially, somewhat reluctantly,
5:58
pushed into technology reporting. But then I realized that it was a really
6:03
great way to explore all of the different issues that I had seen around working in the tech industry. And then I
6:10
quickly specialized in AI reporting. And it was at MIT Technology Review that I
6:16
first started covering artificial intelligence. And TechReview is a publication that is very much focused on
6:23
cutting edge AI research and does not necessarily cover technologies once they are
6:30
commercially viable. It's it's really that fundamental research coming out of academia or
6:36
corporate labs. And so OpenAI came on my radar in 2018 and I started covering it
6:42
in 2019 because I was looking for fundamental AI research labs and OpenAI
6:49
positioned itself as a fundamental AI research lab that had no intent for
6:54
commercialization. And so I ended up being the first journalist, by pure
7:00
coincidence really to profile OpenAI and embed within the company for 3 days in
7:06
August of 2019. Mhm. So you found the company first, the
7:13
fundamental lab, not seeking profit, and you
7:21
thought, that's it, that's the AI lab we need. Is that
7:28
what you first thought? I was very curious to understand better
7:34
whether or not OpenAI was going to be successful in accomplishing its stated
7:40
mission which was to develop AI without any for-profit constraints and focused on the public interest. And
7:49
so when I initially approached the organization to profile them, I said, you know, it seems like now is a good time. There
7:56
were a series of changes that had been happening within OpenAI: it had just restructured to have a for-profit within
8:01
the nonprofit, and Sam Altman had officially become the CEO. And I said, it seems like there are some changes
8:07
that are happening with the organization, but you have really thought carefully about wanting to retain your
8:14
mission of building this technology in the public interest. So I'd like to understand that better and profile how
8:20
you're doing that because if this is a model for innovation in the public interest, we would want to highlight
8:27
that and replicate it in other places around the world. And they liked that idea, and that was part of the
8:33
reason why they invited me to embed within the organization. So I did come with
8:40
a curiosity and an open mind of maybe we have really found an organization that
8:45
has found the way to walk the fine line between needing to raise money and also
8:51
needing to build technology for the public benefit. Um but unfortunately after embedding for 3 days and then
8:59
doing extensive other interviews outside my time within the company but still
9:05
with other employees, I discovered that OpenAI was actually essentially just
9:10
the same as any other Silicon Valley company. Mhm. So they started as a nonprofit with a
9:17
bold mission, and to ensure the transparency of research at first, but
9:25
how did you see all the changes happen inside the company?
9:30
The first change that signaled something was shifting was a walking back of that
9:37
promise of transparency. So they originally said they would open-source all of their
9:42
research and then in early 2019 they decided to withhold a model and that
9:49
model was GPT-2, two generations before ChatGPT, and at the time their arguments
9:56
publicly for why they were withholding the model just didn't quite add up, and there was actually a lot of backlash
10:02
against OpenAI from the scientific community because of the way that they handled withholding this particular
10:09
research. And so they ultimately reversed their decision and released the model, but it was an
10:14
important signal that they were no longer being that transparent and then once I embedded within the company I
10:20
discovered that actually, not only were they not that transparent, they were highly secretive as an organization. They
10:27
didn't want people to know what they were working on even though they explicitly said that they would always
10:34
communicate what they were working on to the public and and bring the public along along this journey of AI
10:40
development. Uh, and as I started doing more interviews, people confirmed to me,
10:46
yes, in fact, this is one of the most secretive organizations that they've ever worked for. And that transition
10:53
happened essentially shortly after Sam Altman joined the company.
10:59
Okay. And after
11:05
your profile of OpenAI on MIT Technology Review, they stopped talking to you for
11:12
3 years, right? So why do you think that happened?
11:19
Yes. So they were very unhappy with the profile because I focused the
11:25
profile ultimately on the disconnect that I saw between what OpenAI publicly
11:30
stated it was doing to accumulate a lot of goodwill among the public and among policy makers and what was actually
11:37
happening behind closed doors. And what I argued at the time was that this
11:43
disconnect could potentially have consequences for the way that AI was introduced into the
11:50
world, introduced into society. And they didn't like the fact that I had
11:58
highlighted that disconnect. They thought that if they gave me access that
12:03
I would write a piece that adopted more of their company narrative.
12:10
Ultimately they decided, and this is a strategy that they've engaged in
12:15
their entire history as a company, that anytime they do not like what a journalist has written, they bar
12:22
that journalist from continuing to have access to the organization. So they switched over to working with other
12:28
journalists instead. Mhm. Okay. And in your book you wrote a lot about Sam
12:36
Altman, so how do you describe
12:42
Altman? Altman is a once-in-a-generation
12:48
storytelling talent. He's very very good at telling compelling stories about the
12:53
future in a way that makes investors want to buy into that future, to put money
12:59
down to back that future and to get talent to commit to building that future
13:06
with him. This is what makes him a remarkable fundraising talent and a
13:14
recruiter, a really really good recruiter. The thing that has remained
13:20
true through Altman's career, though, is he is a very polarizing figure. People who
13:25
work with him either say that he is the greatest tech leader of this current generation, the Steve Jobs of this
13:32
generation or they say that he's a liar, manipulator, abuser. And essentially
13:39
what I realized as I was reporting out the people's perspectives on him is that
13:45
their perspective really depends on what their values and their vision for the future is as well. So for people who
13:51
align with Sam Altman's vision, they think that he is one of the best leaders
13:57
ever because he's incredibly persuasive and he is the best asset you can have in
14:03
getting that capital, getting that talent to move towards the vision that you agree with. But if you have a
14:09
totally different vision of the future, then he becomes one of the greatest threats to achieving your version of the
14:16
future because he will be persuasive enough to continue to take capital and talent away from what you want to build
14:23
and towards what he wants to build. And he also has a loose relationship with the truth. So he tells different stories
14:30
to different people depending on what he thinks they need to hear to motivate them towards where he wants them to go.
14:37
And so all of this combined has led him to be both
14:42
extremely successful against certain metrics as someone who has risen to the top of being the face of the generative
14:50
AI revolution and also someone that has his entire career been trailed by very
14:56
very loud detractors. Mhm. So there was so much rivalry between
15:03
Sam Altman and Elon Musk, who was also a co-founder of OpenAI. So
15:11
did the tension between them become the
15:18
tipping point that made OpenAI more of a for-profit
15:26
organization? I don't think the rivalry between Altman and Musk was the tipping point for
15:33
making OpenAI more of a for-profit, but certainly the rivalry has been
15:42
an important pressure point throughout OpenAI's history, and especially in the present day, when Musk is trying to
15:48
sue Altman and block OpenAI from ultimately fully converting from
15:54
nonprofit to for-profit. And the origin of that rivalry is that Altman
15:59
recruited Musk to the idea of co-founding OpenAI together. And the way
16:06
Musk tells it, he felt like Altman ultimately used him by telling the
16:12
stories that Musk wanted to hear about what they could accomplish together in
16:17
creating this fundamental AI research lab, how they could create a counter against for-profit AI development, how
16:24
ultimately Musk could be incredibly helpful to the project by lending his name and his money to a really young
16:32
upstart endeavor, and that when Musk lost utility to the project, Altman
16:39
then discarded him. This is Musk's version of the story. And so essentially the way that Musk feels is that he
16:45
donated his name; he picked the name OpenAI in the first place. He donated a lot of money and got nothing
16:51
out of it. And, most importantly, lost control of a very consequential
16:59
technology, and now he is trying to regain control with his own company, xAI.
17:04
But OpenAI remains the leader in both consumer adoption and brand recognition as
17:12
compared to xAI. And so there's this long-standing beef. But both Musk
17:17
and Altman actually, early on in the days of OpenAI, recognized that if they wanted to pursue the particular path of
17:24
AI development that they chose which is large scale AI models trained on
17:30
extraordinary amounts of data trained on extraordinarily large supercomputers that OpenAI would need to convert into a
17:36
for-profit. So there are emails from the early days of OpenAI that have surfaced showing that Musk was just as on board
17:43
as Altman was about converting OpenAI into a for-profit to be able to raise the necessary capital that they needed.
17:51
But ultimately, when they tried to create the for-profit, they could not agree on who should be CEO. And it was
17:58
Altman that ultimately convinced the other co-founders of OpenAI to pick him as CEO over Musk. And that's how Musk
18:06
left and to this day feels burned by this whole process.
18:11
Mhm. So there have been power struggles from the early days
18:18
in a company with such massive power. Yeah. So one of the things that
18:25
I realized through the course of my reporting is that
18:31
OpenAI's history has been riddled with clashes between egos and ideologies.
18:38
And when you look at all of the executives that have ultimately left
18:43
OpenAI, right, they have all done exactly the same thing, which is start a rival company
18:49
that is in their image instead of in Altman's image. And the reason why they leave OpenAI is because they have
18:55
fundamental personality disagreements and ideological differences about how to develop AI. So
19:02
that includes Elon Musk leaving to start xAI; Dario Amodei leaving to start
19:07
Anthropic; Ilya Sutskever, the chief scientist, leaving to start Safe Superintelligence;
19:14
and Mira Murati, the former chief technology officer, leaving to start Thinking Machines Lab. And
19:20
and of course, beyond just the most senior-level executives, each one, as they have left, has poached some of
19:29
the employees, some of the staff, from OpenAI to bring with them on this
19:35
alternative path. And ultimately,
19:40
what this shows you is that many of these high-level executives within the AI
19:46
industry now kind of view this technology as an extension of their own
19:52
will and their own desires for what they want to see in the future. And so if they do not agree about how AI should be
20:00
developed and ultimately how it should be deployed, they ultimately just break off into their own faction to accrue
20:06
their own resources to build their own empire and try and compete with one another in
20:12
the marketplace and in the world. Yeah. So ultimately, what does
20:18
empire mean to you? The reason why I
20:23
call my book Empire of AI is because, first of all, I
20:29
think it acknowledges the fact that there's an extraordinary concentration of both
20:35
political and economic leverage in these companies and empires are
20:41
political economic entities that monopolize power in the world. But
20:47
there are also four features that I point to in the book that are parallels between empires of AI and empires of
20:54
old. The first one is that empires lay claim to resources that are not their own. And we see empires of AI do this
21:00
with the scraping of data on the internet and the use of intellectual property from artists, writers, and
21:06
creators. And they try to justify this by saying, "Oh, those resources were actually always our own. That it's in
21:12
the public domain. It's fair use." Um, the second feature is that empires will
21:18
exploit a lot of labor and that not only refers to the way that empires of AI
21:24
exploit labor in the process of producing their technology and I highlight stories of workers in Kenya,
21:30
in Venezuela that both OpenAI and other AI companies contracted to perform some
21:37
of the most disturbing types of work that
21:43
they ultimately needed to produce their technologies. But it also refers to the
21:48
fact that the technology itself is labor-automating in and of itself. So, OpenAI defines so-called artificial general
21:55
intelligence as highly autonomous systems that outperform humans in most economically valuable work. So they
22:02
quite explicitly articulate that they're trying to automate jobs away and that in
22:07
and of itself is going to erode workers' rights, erode their bargaining power, and
22:13
exploit more labor in the future. The third feature of Empire is that they
22:19
monopolize knowledge production. So in the last 10 years, we've seen that the AI industry is able to give out
22:26
compensation packages that can easily cross over a million dollars for AI researchers. And so most AI researchers
22:32
have shifted from working for universities and independent research institutions to working for these
22:38
companies. And the effect on AI research would be what you would imagine the effect would be if most climate
22:44
scientists were bankrolled by oil and gas companies. You would assume that the research coming out would not accurately
22:51
represent the climate crisis. And that's essentially what has happened with AI research: it does not accurately represent
22:57
how this technology works and what the limitations are anymore because things that are inconvenient to these companies
23:04
get censored. And the fourth and final feature is that empires always engage in a narrative that there are good empires
23:10
and there are evil empires in the world, and the good empire has to be an empire in the first place to be able to
23:17
be strong enough to beat back the evil empire. And throughout OpenAI's history,
23:23
they have always identified who the evil empire is, but it has shifted based on what's convenient. So originally, the
23:30
evil empire was Google. Increasingly, the evil empire is China. And the argument is that if the evil empire gets
23:38
the technology first, then humanity will go to hell. But if they get unfettered access to resources, labor, land,
23:45
energy, water, then they, the good empire, can use these resources to civilize the world, bring progress and
23:52
modernity to all of humanity, and humanity will have a chance to go to heaven. And that is not only the
23:58
rhetoric that was used by European empires during the colonial era, but is quite literally also the same language
24:05
used by AI companies today. Thank you. So that leads to part two:
24:10
the hidden cost of the AI empire. As
24:16
you wrote in the book, you flew to Kenya and Chile to see
24:25
what the hidden costs of AI are. So what struck you the most when you
24:35
covered those areas? You know, one of the things that
24:43
struck me, one of the reasons why I decided to take those trips is because
24:49
you do not fully see the logic of empire
24:55
until you are on the ground with these communities and that's when it becomes irrefutable
25:01
that this really is an empire. Because when I was with the Kenyan workers, these were workers that OpenAI
25:07
contracted to perform content moderation, essentially, for what would ultimately become ChatGPT, and they
25:15
suffered the same consequences that content moderators of the social media era suffered. They ended up completely
25:23
traumatized. Their mental health was broken down. The people who depended on them, their families and their
25:29
communities, were also affected by losing a critical member of their community. So I highlight this one man, Mophat Okinyi, who I
25:36
met, who was on the sexual content team for OpenAI, which meant that he was
25:42
reading through reams of the worst sexual content on the internet, as well as AI-generated content, where OpenAI was
25:48
prompting models to imagine the worst sexual content on the internet. And he then had to categorize this into
25:55
a detailed hierarchy of sexual content: benign sexual content;
26:02
sexual content that involved abuse and therefore should be categorized as higher severity and sexual content that
26:08
involved abuse of minors which was the highest severity. And through that
26:13
process, he completely changed as a person. He was originally extroverted but
26:19
became very, very introverted and withdrawn into himself. And he couldn't explain to his wife and to his
26:26
stepdaughter, her daughter from a different relationship, what was actually happening, because he didn't know how to say to them that he read sexual
26:33
content all day for his job. That did not sound like a real job. It sounded like a very shameful job. And so one day
26:40
his wife asked him, "I would like fish for dinner." And he went out and bought three fish. One for him, one for her,
26:46
one for the stepdaughter. And by the time he came back home, all of their stuff, all of their belongings had been
26:52
packed up and they had left. And she texted him, "I don't understand the man you've become anymore. We're not coming
26:59
back." And when you start to sit with the impact
27:06
that this technology production is having on the most vulnerable people in society, and on top of that, for this
27:15
devastating impact, he was paid a few dollars an hour, while OpenAI researchers sitting within the company
27:22
were being paid million-dollar compensation packages or more annually, that's when
27:27
you start to realize the logic of empire. Because philosophically,
27:33
there is no rationalization for this dehumanization of some people
27:40
in the world other than an ideological one of empire that there are certain
27:46
people born to be superior and others that are born to be inferior and therefore the superior people have the
27:52
right to subjugate those people in the world. And I felt the very same thing
27:57
when I was in Chile. In fact, I was embedding with Chilean
28:03
water activists who were fighting against data centers being built by these companies to train and deploy
28:10
their AI models. And the Chilean activists said it themselves, before I ever mentioned the word empire. They were
28:16
like, "In order to understand our story today, you have to understand our history and the fact that for centuries
28:24
our communities in Chile, the working class, the marginalized, the indigenous,
28:30
it's our communities that have for centuries been asked to have our natural resources gouged out. We've been
28:38
dispossessed repeatedly by higher powers in service of something that's
28:43
never going to benefit us." Mhm. And they said,
28:49
"That is empire." You know, they made that connection before I did. Okay. But outsourcing content
28:56
moderation or data annotation jobs is not new in the tech industry. I guess
29:04
content moderation especially happened on YouTube, Facebook, and other
29:10
social media, and I've seen a lot of reports about Asian workers struggling
29:16
with this work outsourced from companies in the US. But is the
29:23
difference big between those and generative AI model development?
29:30
Yeah, we're talking about a fundamentally different degree of scale with AI development versus with previous
29:36
social media era companies. So if you look at Meta, which has spanned both the social media era and the generative AI
29:44
era just looking at the amount of data that they have needed to accumulate in
29:50
order to train their AI models: in the social media era, they accrued around 4 billion accounts' worth of user
29:57
data to power their ad-targeting engine. And once they shifted into the generative AI
30:03
era, they were like, these 4 billion user accounts of data is not
30:10
enough. It's not sufficient to train competitive generative AI models. And so
30:16
they literally started talking about: should we just acquire
30:22
Simon & Schuster? Should we look for other types of data elsewhere? Should we
30:28
continue to scrape things off the internet? And eventually what they concluded was to not buy a publishing
30:34
house, but to just torrent all of those books off of the dark web and continue to feed ever more data into
30:41
their models. And so that is just one data point for understanding that the scale has jumped from social media era
30:48
to generative AI era. And that also means the scale for the content moderation and the labor contracting
30:54
workforce has jumped to meet that demand. And you can see that scale jump with
31:00
data centers as well. So before, with data center development, we were not talking about a level of data center
31:06
development that would actually strain the global grid in terms of electricity.
31:11
Now we are. And there was a recent McKinsey report that projected that
31:16
based on the current expansion of data centers and supercomputers just to bolster the AI development and
31:23
deployment, we would need to add two to six times the amount of energy consumed
31:28
annually in the state of California, which is the fifth largest economy in the world, onto the global grid in the
31:34
next 5 years. And most of that will be fossil fuels. Mhm. And so we are reaching a fundamentally
31:43
different tier of data consumption, data scraping, contract work, data center
31:50
development, resource extraction, energy consumption than ever before.
31:56
Mhm. So that's also why Meta acquired a 49% stake in Scale AI, right? That
32:04
just happened this month. So what did you think about those
32:12
moves? Yeah, that's exactly right. Meta has recognized that in
32:18
order to continue pursuing this particularly large scale AI development approach, one of the key ingredients for
32:25
being competitive is that labor force. And Scale AI, which is one of the companies that I write about in my book,
32:32
has been particularly successful from a business-metric standpoint at
32:38
accumulating a large base of labor and paying them very very little money. And
32:45
ultimately, Meta wants access to that workforce as well as the actual
32:54
business intelligence that Scale has accumulated from servicing so many different AI companies through its
32:59
contract workforce. And this once again highlights how important these workers actually are. They
33:07
actually have access to a lot of the secret sauce of these companies and their AI development. Without these
33:13
contract workers, those AI models really would not work remotely like they do
33:19
today. And Scale has a hold of all of those instructions. That is extraordinary
33:25
amounts of value. So Meta was acquiring not just the workforce, but also those instruction manuals and the ability to
33:33
peer into the strategies of what its competitors have been doing to develop
33:38
their AI technologies. Right. So do you think that kind of exploitation could
33:45
also happen in countries like Japan? There's a lot of data center construction going on all over Japan,
33:52
and we're talking about what's called the digital trade deficit,
34:00
because we use a lot of software and services from the US.
34:06
So what's your take on that? Japan is very vulnerable to both the resource exploitation and the labor
34:13
exploitation, because ultimately, you know, OpenAI is trying to aggressively
34:19
expand its consumer base in Japan. It's why Tokyo was one of the first
34:25
foreign offices that it set up. And when they are trying to cater to
34:32
the Japanese population, they need Japanese-language speakers to do the kind of content moderation and model
34:38
curation that the Kenyan workers did for English-speaking models. And so
34:44
there are only so many places in the world that have large Japanese speaking populations and Japan is one of the
34:50
primary ones. And so they will absolutely look for workforces
34:56
in more economically vulnerable parts of Japan to recruit those workers and try and replicate the playbook that
35:03
they have used for other models in English-language development. And
35:10
they are also dramatically trying to expand the amount of data centers that are being built in Japan. I actually for
35:16
my book I spoke with a Japanese-based data center investor who told me that
35:23
the amount of power that these companies are seeking for large-scale
35:32
data center construction is something that has just never been seen before. And these are projects
35:39
where they're not just putting down data centers with existing power
35:44
infrastructure. They are planning where to build more power plants so that they
35:51
have enough power to deliver to these data centers. But we've already seen reporting out of the US that when this
35:57
happens, when you land massive data centers and massive new power
36:04
plants in different parts of the US, primarily in rural communities where there's a lot of land, cheap land, cheap
36:11
electricity, and things like that, it can lead to really weird distortions in the electric grid. It can erode
36:18
the grid's resilience. It can hike up utility prices for
36:23
people well beyond just that particular community. And so that is something that
36:28
Japanese residents are going to have to worry about now: as these data centers continue to sprawl in places
36:36
that might be hidden from sight from most urban centers, they will still feel
36:41
the impacts on their water utilities, their energy utilities. They might start feeling the impacts on air quality if
36:49
the power plants are fossil fuel-based, and that will have ripple effects,
36:54
long-standing ripple effects, well beyond just the next few years. Right. Okay, yeah, that's a great
37:01
insight. Um, sorry, I don't want to keep you too long, but there's a
37:07
final part, a call for AI governance. So investors pour
37:15
lots of money into AI startups like OpenAI so they can accelerate their
37:21
research and development, and one of the key investors is Japan-based
37:27
SoftBank, and so those investors play an
37:33
important role here. So how do you think investors hold themselves
37:38
accountable for AI development? Or do you mean, like, what is the
37:44
responsibility that investors have had? Yes. Yes. Investors have had a huge
37:51
responsibility in facilitating this current reckless race in AI
37:58
development. And that is because many of the investors in the AI era are making
38:06
bets that are not based on the sustainability of the business or the
38:13
financials of their investment, but on the idea that they might be able to cash out somewhere along the bubble before it
38:20
bursts and earn back at least their investment. And the problem is, you
38:27
know, I was just talking with investors this morning who were saying to me that the bubble has gotten
38:34
extraordinarily big and the risk is going to be inherited by the entire economy because some of this investment
38:40
is coming from university endowments, people's retirement funds, from all parts of the financial markets. And
38:50
so when the bubble pops, if the investment doesn't get returned, that is people's retirements, life savings, that
38:56
are actually going up in smoke, not just, you know, funny money that
39:01
doesn't impact people if it disappears. And right now we are
39:08
seeing a lot of investors pumping so much money into the development of this
39:13
technology without necessarily getting the returns back in part because there is just so much hype that people are
39:19
really afraid of losing out. And a lot of different funds have realized that if
39:24
they say that they are investing in AI, it is a really great way to amass more
39:29
capital, not necessarily a way to generate more returns, but simply to fundraise
39:36
faster. And there are significant career risks for investors to go against the AI
39:43
hype train. So there are all of these different compounding factors that are leading investors to glom onto this
39:51
investment without necessarily seeing smart financials,
39:56
without actually seeing the math work out. And unfortunately that has continued to perpetuate the bubble well
40:03
beyond what is sound. And ultimately, if we want to shift towards other forms of
40:11
AI development that are going to be more beneficial and less costly, we do need to
40:18
redistribute the capital first and foremost to those other approaches. And so investors have to
40:25
take bold action in moving to those other approaches in order to get there.
40:30
Right. Right. Um, so the final part is about AGI.
40:37
Everyone's talking about AGI. Everyone says that AGI is a few years away, but as you wrote in the book, the
40:44
definition is really vague. So how do we
40:50
think about it? How do we get ready for AGI, or
40:57
how do we get ready for a world like that?
41:03
that everyone is saying AGI is a few years away. It's only people who
41:09
could make a lot of money from everyone believing that AGI is a few years away that are actually saying
41:15
that. So, there was a New York Times article recently whose headline was
41:22
"Why we likely won't get AGI anytime soon." And it cited this stat from a survey
41:28
of well-respected AI researchers in the field. And 75% of them think that we
41:35
don't even have the techniques yet to develop AGI, if we ever will at all. Mhm.
41:40
And so I think it's really important to not put the cart before the horse and start talking about how would we live
41:46
with AGI, and just recognize that this is not coming anytime soon.
41:52
Scientists don't actually think it's coming anytime soon. They don't even know if it's going to come ever.
41:57
And we should ultimately focus on how we can thoughtfully
42:04
redistribute capital and resources now to develop AI technologies that have
42:09
nothing to do with the quest for AGI, but are just beneficial, task-specific applications that target
42:17
well-scoped challenges. Whether that's improving health care by identifying cancer earlier in MRI scans, or improving
42:24
drug discovery, such as with DeepMind's AlphaFold system, which accurately predicts how proteins fold from their
42:31
amino acid sequences and won the Nobel Prize in Chemistry in 2024. Or focus
42:37
on task-specific AI systems that integrate more renewable energy into the grid so that we can transition to a
42:43
cleaner energy future faster, or help discover new materials that can
42:49
improve our energy storage capacity. All of those types of AI technologies are really where we should
42:56
be putting our time, energy, and focus and we should really pivot away from
43:04
this quest to build a so-called everything machine, because it's probably not going to arrive. It is consuming a
43:11
colossal amount of resources right now. It is perpetuating extraordinary amounts of labor harms,
43:16
and it ultimately isn't actually bringing us the economic benefits that the people trying to build this technology
43:24
have said it would. The track record of this technology has been
43:29
middling at best in actually bringing productivity to people. And so I
43:38
really do think that we have the opportunity to actually build AI technologies that work
43:46
for us, rather than consume all of these resources to build an AI that ultimately
43:52
we are serving. So you're saying that not everything needs to be a generative AI model, right?
44:00
Exactly. Generative AI is a tiny slice of the full array of AI
44:05
technologies, and it is the one that to date has had the worst cost-benefit
44:11
trade-offs. Mhm. Mhm. Maybe I have to let you go now. So, thank you. It was great to talk.
44:18
Thank you for joining us today. Thank you so much for having me. Yeah. Thank you.
44:27
[Music]




アルマンは一世代に一度しかいない物語の才能の持ち主です。
0:06
彼と一緒に働く人たちは、彼が最高のテクノロジーリーダーだと言うか、嘘つき、策略家だと言うか、
0:13
虐待者。彼らは衝突の多い組織として非常に秘密主義だった。
0:19
エゴとイデオロギーの狭間で。日本は資源搾取と労働力不足の両方に対して非常に脆弱である。
0:25
搾取。AI帝国は自らの技術を生み出す過程で労働者を搾取します。
0:32
[音楽]
0:53
オープンAI。[音楽]
1:06
帝国
1:22
AIの。
1:44
さて、ここで『Empire of AI』の著者、カレン・ハウさんにご参加いただいています。
1:49
いらっしゃいました。お招きいただき、ありがとうございます。はい。それでは香港からご参加ですね
1:55
今日はそうですね。それで、今は香港に拠点を置いているんですか?もうアメリカにはいないんですか?
2:02
そうです。今は香港に拠点を置いています。わかりました。それでは、本の出版おめでとうございます。そして
2:10
詳細に入る前に、ええと、記事を投稿してから1ヶ月が経ちましたね。それで
2:16
これまでのところ、あなたの本への反応はどうですか?ええ、とても感謝しています
2:23
概ね好評で、人々は本当に
2:28
この本に掲載されているOpenAIの内幕を伝えるレポートと、
2:35
OpenAIのような企業を新たな形態の帝国として理解する必要があるという私の主張に賛同する人もいます。
2:43
議論には参加しないけど、報道には感謝しているけど、本当に感謝している
2:50
えっと、まさに期待していた通りの反応ですね。なるほど。ああ、そう言ってくれて嬉しいです。
2:57
日本の出版社とも交渉中ですか?ええ、実は日本の出版社が
3:04
この版は私たちが契約した最初の海外版です。来年初めに出版される予定です。
3:09
ああ、来年早々ですね。楽しみです。それで、えーと、今日は
3:16
インタビューは3つのセグメントに分かれており、1つはAIとオープンの帝国の中核です
3:23
オープンAIの変革についてお聞きしたいのですが、AIレポートに携わるようになったきっかけは何ですか?
3:29
オープンAIをカバーしていますが、時間の経過とともに会社が変わったと感じていますか?
3:35
1つ目と2つ目はAI帝国の隠れたコストです。それで、
3:41
walk us through what you wrote about the cost behind the large AI models. So
3:47
third one is a call for AI governance. Uh that's also what you wrote uh the
3:53
importance of AI governance with a a wider con concentration of the power in
3:59
AI is happening. So we'll dive deeper on that.
4:04
That sounds great. Yeah. So the first part the core of empire of AI. Um the first you want to
4:11
ask I I want to ask you that you earned a degree of mechanical engineering at MIT but you got into journalism and
4:19
especially AI reporting. So and you're one of the reporters who has been
4:24
covering open AAI from the very early days. So could you quickly walk us
4:30
through how all that happened? Yeah. Yeah. So I I studied mechanical
4:37
engineering because I was very interested in understanding how to use
4:42
technology as a tool for social change. At the time I was quite interested in environmental issues and I thought if we
4:49
could develop products technology products that facilitate consumer shifts in consumer behavior that would be one
4:56
of the ways to mitigate the climate crisis. But quickly after graduating, I
5:02
went to work in Silicon Valley, worked at a startup with a sustainability mission. And I realized that the Silicon
5:09
Valley model of innovation is very much driven by building technologies that generate profit, not necessarily
5:15
building technologies in the public interest. And in fact, increasingly, it felt like it was ushering in technologies that were antithetical to
5:22
the public interest. And so it made me confront whether or not going into this
5:29
industry was actually the best career path for what I wanted to do. And I
5:34
quickly concluded that it was not and I should try doing something else. And that's how I ultimately switched into
5:39
journalism originally to cover environmental issues because I was still focused on the idea of trying to tackle
5:49
this climate crisis. But because of my background, I was um sort of initially reluctantly
5:58
pushed into technology reporting. But then I realized that it was a really
6:03
great way to explore all of the different issues that I had seen around working in the tech industry. And then I
6:10
quickly specialized in AI reporting. And it was at MIT Technology Review that I
6:16
first started covering artificial intelligence. And TechReview is a publication that is very much focused on
6:23
cutting edge AI research and does not necessarily cover technologies once they are
6:30
commercially viable. It's it's really that fundamental research coming out of academia or
6:36
corporate labs. And so OpenAI came on my radar in 2018 and I started covering it
6:42
in 2019 because I was looking for fundamental AI research labs and OpenAI
6:49
positioned itself as a fundamental AI research lab that that had no intent for
6:54
commercialization. Um, and so I ended up being the first journalist by by pure
7:00
coincidence really to profile OpenAI and embed within the company for 3 days in
7:06
August of 2019. Mhm. So you found the company first. Uh the
7:13
company is sort of the fundamental uh like not seeking profit but you you
7:21
were like that's it like that's the AI love that's we need or is that is that
7:28
what you first thought? I was very curious to understand better
7:34
whether or not OpenAI was going to be successful in accomplishing its stated
7:40
mission which was to develop AI without any for-profit constraints and focused on the public interest. And
7:49
so when I initially approached the organization to profile them I said you know it seems like now is a good there
7:56
were a series of changes that had been happening within OpenAI. it had just restructured to have a for-profit within
8:01
the nonprofit and Sam Wolman had officially become the CEO and I said it seems like there there are some changes
8:07
that are happening with the organization but that you have really thought carefully about wanting to retain your
8:14
mission of building this technology in the public interest. So I'd like to understand that better and profile how
8:20
you're doing that because if this is a model for innovation in the public interest, we would want to highlight
8:27
that and replicate that um in other places around the world and they liked that idea and that that was part of the
8:33
reason why they invited me to embed within the organization. Um, so I did come with
8:40
a curiosity and an open mind of maybe we have really found an organization that
8:45
has found the way to walk the fine line between needing to raise money and also
8:51
needing to build technology for the public benefit. Um but unfortunately after embedding for 3 days and then
8:59
doing extensive other interviews outside my time within the company but still
9:05
with other employees I discovered that open eye was actually essentially just
9:10
the same as any other Silicon Valley company. Mhm. So we started as nonprofit with the
9:17
bold mission and uh to ensure the transparency of research at first but
9:25
how did you see all the changes happen inside the company?
9:30
The first change that signaled something was shifting was a walking back on that
9:37
transparency that promise for transparency. So they originally said they would open source all of their
9:42
research and then in early 2019 they decided to withhold a model and that
9:49
model was GBT2 two generations before chat GBT and at the time their arguments
9:56
publicly for why they were withholding the model just didn't quite add up and there was actually a lot of backlash
10:02
against opening eye from the scientific community because of the way that they handled this particular withholding
10:09
research and so they ultimately then reversed their decision and they they released the model but it was an
10:14
important signal that they were no longer being that transparent and then once I embedded within the company I
10:20
discovered that actually not only were they not that transparent they were highly secretive as an organization they
10:27
didn't want people to know what they were working on even though they explicitly said that they would always
10:34
communicate what they were working on to the public and and bring the public along along this journey of AI
10:40
development. Uh, and as I started doing more interviews, people confirmed to me,
10:46
yes, in fact, this is one of the most secretive organizations that they've ever worked for. And that transition
10:53
happened essentially shortly after Sam Alman joined the company.
10:59
Okay. And after you reported about the after
11:05
your profile open AI on MIT technology review, they stopped talking to you for
11:12
3 years, right? So why why do you think that happened?
11:19
They Yes. So they they were very unhappy with the profile because I focused the
11:25
profile ultimately on the disconnect that I saw between what OpenAI publicly
11:30
stated it was doing to accumulate a lot of goodwill among the public and among policy makers and what was actually
11:37
happening behind closed doors. And what I argued at the time was that this
11:43
connect disconnect would could potentially have consequences for the way that AI was introduced into the
11:50
world, introduced into societ society. And they didn't like the fact that I had
11:58
highlighted that disconnect. They thought that if they gave me access that
12:03
I would write a more a piece that adopted more of their company narrative
12:10
and the ultimately they decided and this is a strategy that they've engaged with
12:15
engaged with their entire history as a company and anytime they do not like what a journalist has written they bar
12:22
that journalist from continuing to have access to the organization. So they switched over to working with other
12:28
journalists instead. Mhm. Okay. And in your book you wrote a lot about some of
12:36
Altman and so how do you describe
12:42
Altman? Alman is he is a once in a generation
12:48
彼は物語を語る才能に恵まれています。彼は、
12時53分
投資家がその将来に投資したくなるような方法で将来を予測する。
12時59分
その未来を支え、その未来を築くために才能を発揮してもらう
13時06分
彼と一緒に。これが彼を素晴らしい資金調達の才能と
13時14分
採用担当者、本当に優秀な採用担当者。
13時20分
アルウィンのキャリアを通して言えることは、彼は非常に賛否両論の人物だということです。
13時25分
彼と一緒に仕事をするということは、彼が今の世代の最高の技術リーダーだと言うことです。つまり、この時代のスティーブ・ジョブズです。
13時32分
世代の人々は彼を嘘つき、人を操る、虐待者だと言います。そして本質的には
13時39分
彼に対する人々の見解を報道する中で私が気づいたのは
13時45分
彼らの視点は、彼らの価値観や将来のビジョンによって大きく左右されます。
13時51分
サム・アルマンのビジョンに共感する人々は、彼が最高のリーダーの一人だと考えている。
13時57分
彼は信じられないほど説得力があり、あなたが持つことができる最高の資産です
14:03
資本を獲得し、才能を獲得して、あなたが賛同するビジョンに向かって前進する。しかし、もしあなたが
14:09
全く異なる未来のビジョンを持っている場合、彼はあなたのビジョンを実現する上で最大の脅威の1つになります
14時16分
彼は、あなたが築きたいものから資本と才能を奪い続けるのに十分な説得力を持っているからです
14時23分
そして、自分が築きたいものに向かって。そして、真実とはゆるやかな関係を築いている。だから、彼は様々な物語を語るのだ
14時30分
彼は、自分が望む方向へ向かわせるために、相手が何を聞く必要があるかに応じて、相手にさまざまなメッセージを伝えます。
14時37分
そして、これらすべてが組み合わさって、彼は
14時42分
ジェネレーティブ・メディアの顔として頂点に上り詰めた人物として、ある基準では非常に成功した
14時50分
AI革命の人物であり、また彼のキャリア全体を通して非常に
14時56分
非常に声高に批判する人たち。ええ、ええ。それで、ええと、
15:03
サム・アルマンと、OpenAIの共同創設者でもあるイーロン・マスク。
15:11
彼らの間の緊張のようなものは、それが一種の
15時18分
OpenAIを営利企業に近づける転換点
15:26
組織?アルトマンとマスクの対立が転換点だったとは思わない
15時33分
OpenAIはより営利企業になりつつあるが、確かに競争は
15時42分
これはOpenAIの歴史を通して、そして特にマスク氏が
15時48分
アルマンを訴え、OpenAIが最終的に完全に
15時54分
nonprofit to for-profit. And the origin of that rivalry is that Musk Altman
15:59
recruited Musk to OpenAI to the idea of co-founding OpenAI together. And the way
16:06
Musk tells it, he felt like Alman ultimately used him by saying the
16:12
stories that Musk wanted to hear about what they could accomplish together in
16:17
creating this fundamental AI research lab, how they could create a counter against for-profit AI development, how
16:24
ultimately Musk could be incredibly helpful to the project by lending his name and his money to a really young
16:32
upstart endeavor. and that when Musk lost utility to the project that Altman
16:39
then discarded him. This is Musk's version of the story. And so essentially the way that Musk feels is that he
16:45
donated his name. He donated he picked the name OpenAI in the first place. He donated a lot of money and got nothing
16:51
out of it. And and and most importantly lost control of a very consequential
16:59
technology and now he is trying to regain control with his own company XAI.
17:04
But OpenAI remains the leader both in consumer adoption, brand recognition as
17:12
compared to XAI. And so there there's this long-standing beef. But both Musk
17:17
and Alman actually early on in the days of opening I recognized that if they wanted to pursue the particular path of
17:24
AI development that they chose which is large scale AI models trained on
17:30
extraordinary amounts of data trained on extraordinarily large supercomputers that OpenAI would need to convert into a
17:36
for-profit. So there have been emails opened up in the early days of OpenAI that show that Musk was just as on board
17:43
as Altman was about converting OpenAI into a for-profit to be able to raise the necessary capital that they needed.
17:51
But the ultimately when they tried to create the for-profit, they could not agree on who should be CEO. And it was
17:58
Altman that ultimately convinced the other co-founders of OpenAI to pick him as CEO over Musk. And that's how Musk
18:06
left and to this day feels burned by this whole process.
18:11
Mhm. So there's been power struggles uh from the early days in such a massive
18:18
the in the company with such massive power. Yeah. So one of the things that I
18:25
I realized through the course of my reporting is that
18:31
OpenAI's history has been riddled with clashes between egos and ideologies.
18:38
And when you look at all of the executives that have ultimately left
18:43
OpenAI, right, they have all done exactly the same thing, which is start a rival company
18:49
that is in their image instead of in Altman's image. And the reason why they leave OpenAI is because they have
18:55
fundamental personality disagreements and ideological differences about how to develop AI. So
19:02
that includes Elon Musk leaving to start XAI. Dario Amade leaving to start
19:07
Anthropic. Um Ilia Sutzkver the chief scientist leaving to start safe super
19:14
intelligence. And Mera Moratti the former chief technology officer leaving to start thinking machines lab. And
19:20
there are of course beyond just the most senior level executives there each one as they have left have gouged some of
19:29
the employees some of the staff from open to bring with them in this uh
19:35
alternative path. Um and the ultimately
19:40
what this shows you is that many of these highle executives within the AI
19:46
industry now they kind of view this technology as an extension of their own
19:52
will and their own desires for what they want to see in the future. And so if they do not agree about how AI should be
20:00
developed and ultimately how it should be deployed, they ultimately just break off into their own faction to acrue
20:06
their own resources to build their own empire and try and compete with one another in
20:12
the marketplace and in the world. Yeah. So um ultimately what does the
20:18
empire mean for you? The reason why I
20:23
call my book Empire of AI is because I I
20:29
think there are first of all it acknowledges the fact that there's an extraordinary concentration of both
20:35
political and economic leverage in these companies and empires are
20:41
political economic entities that monopolize power in the world. But
20:47
there's also four features that I point to in the book that are parallels between empires of AI and empires of
20:54
old. The first one is that empires lay claim to resources that are not their own. And we see empires of AI do this
21:00
with the scraping of data on the internet and the use of intellectual property from artists, writers, and
21:06
creators. And they try to justify this by saying, "Oh, those resources were actually always our own. That it's in
21:12
the public domain. It's fair use." Um, the second feature is that empires will
21:18
exploit a lot of labor and that not only refers to the way that empires of AI
21:24
exploit labor in the process of producing their technology and I highlight stories of workers in Kenya,
21:30
in Venezuela that both OpenAI and other AI companies contracted to perform some
21:37
of the most disturbing types of work that um ultimately need to um that that
21:43
that they ultimately wanted to produce their technologies, but it refers to the
21:48
fact that the technology itself is labor automating in and of itself. So, OpenAI defines so-called artificial general
21:55
intelligence as highly autonomous systems that outperform humans in most economically valuable work. So they
22:02
quite explicitly articulate that they're trying to automate jobs away and that in
22:07
and of itself is going to erode workers rights, erode their bargaining power and
22:13
exploit more labor in the future. The third feature of Empire is that they
22:19
monopolize knowledge production. So in the last 10 years, we've seen the AI industry is able to give out
22:26
compensation packages that can easily cross over a million dollars for AI researchers. And so most AI researchers
22:32
have shifted from working for universities and independent research institutions to working for these
22:38
companies. And the effect on AI research would be what you would imagine the effect would be if most climate
22:44
scientists were bankrolled by oil and gas companies. You would assume that the research coming out would not accurately
22:51
represent the climate crisis. And that's essentially what has happened with AI research. does not accurately represent
22:57
how this technology works and what the limitations are anymore because things that are inconvenient to these companies
23:04
get censored. And the fourth and final feature is that empires always engage in a narrative that there are good empires
23:10
and there are evil empires in the world and they the good empire have to be an empire in the first place to be able to
23:17
be strong enough to beat back the evil empire. And throughout openio's history,
23:23
they have always identified who the evil empire is, but it has shifted based on what's convenient. So originally, the
23:30
evil empire was Google. Increasingly, the evil empire is China. And the argument is that if the evil empire gets
23:38
the technology first, then humanity will go to hell. But if they get unfettered access to resources, labor, land,
23:45
energy, water, then they, the good empire, can use these resources to civilize the world, bring progress and
23:52
modernity to all of humanity, and humanity will have a chance to go to heaven. And that is not only the
23:58
rhetoric that was used by European empires during the colonial era, but is quite literally also the same language
24:05
used by AI companies today. Thank you. So that leads to the part two
24:10
the hidden cost of AI empire. So um as
24:16
you wrote in the book uh you flew to Kenya and Chile uh to see how the AI the
24:25
how what the hidden cost of AI are. So what um struck you the most when you
24:35
cover those areas? You know, one of the things that
24:43
struck me, one of the reasons why I decided to take those trips is because
24:49
you do not fully see the logic of empire
24:55
until you are on the ground with these communities and that's when it becomes irrefutable
25:01
that this really is an empire. Because when I was with the Kenyon workers, these were workers that opening eye
25:07
contracted to build to perform content moderation essentially for what would ultimately become chatbt and they
25:15
suffered the same consequences that content moderators of the social media era suffered. They ended up completely
25:23
traumatized. Their mental health was broken down. the people who depended on them and their families and their
25:29
communities were also affected by losing a critical member of their community. So I highlight this one man Moof Kin who I
25:36
met who he was on the sexual content team for OpenAI which meant that he was
25:42
reading through reams of the worst sexual content on the internet as well as AI generated content that OpenAI was
25:48
prompting models to imagine the worst sexual content on the internet. And he then had to categorize this into
25:55
detailed a detailed hierarchy of sexual content. The the benign sexual content,
26:02
sexual content that involved abuse and therefore should be categorized as higher severity and sexual content that
26:08
involved abuse of minors which was the highest severity. And through that
26:13
process he completely changed as a person. He was originally extroverted
26:19
became very very introverted and withdrawn in on himself. and he couldn't explain to his wife and to his
26:26
stepdaughter, her daughter, from a different relationship, what was actually happening because he didn't know how to say to them that he read sex
26:33
content all day for his job. That sounded not like a real job. It sounded like a very shameful job. And so one day
26:40
his wife asked him, "I would like fish for dinner." And he went out and bought three fish. One for him, one for her,
26:46
one for the stepdaughter. And by the time he came back home, all of their stuff, all of their belongings had been
26:52
packed up and they had left. And she texted him, "I don't understand the man you've become anymore. We're not coming
26:59
back." And when you when you start to sit with the impact
27:06
that this technology production is having on the most vulnerable people in society. And on top of that, for this
27:15
devastating impact, he was paid a few dollars an hour, while OpenAI researchers sitting within the company
27:22
were being paid milliondoll compensation packages or more annually. That's when
27:27
you start to realize the logic of empire because it is philosophically
27:33
it there is no rationalization for this treatment dehumanization of some people
27:40
in the world other than an ideological one of empire that there are certain
27:46
people born to be superior and others that are born to be inferior and therefore the superior people have the
27:52
right to subjugate those people in the world and I felt the very Same thing
27:57
when I was in Chile and in fact the Chilean I I was embedding with Chilean
28:03
water activists who were fighting against data centers being built by these companies to train and deploy
28:10
their AI models. And the Chilean activists said themselves before I ever mentioned the word empire. They were
28:16
like, "In order to understand our story today, you have to understand our history and the fact that for centuries
28:24
we our communities in Chile, the working class, the marginalized, the indigenous,
28:30
it's our communities that have for centuries been asked to have our natural resources gouged out. We've been
28:38
dispossessed repeatedly by higher powers in service of something that's that's
28:43
never going to benefit us." Mhm. and they said
28:49
that is empire you know like that that they made that connection before I did. Okay. But the outsourcing the content
28:56
moderation or data annotation jobs is not new in the tech industry. I guess the
29:04
content especially content moderation happen on YouTube and Facebook and other
29:10
social media. And I I've seen a lot of reports about Asian people struggling
29:16
with this that like outsource from the company in the US. But it is it is the
29:23
difference is big uh between those and gener generative AI models development.
29:30
Yeah, we're talking about a fundamentally different degree of scale with AI development versus with previous
29:36
social media era companies. So if you look at meta which has spanned both the social media era and the generative AI
29:44
era just looking at the amount of data that they have needed to accumulate in
29:50
order to train their AI models like in in the social media era they acrewed around 4 million accounts worth of user
29:57
data to power their ad targeting engine. And once they shifted into generative AI
30:03
era, they were like these 4 million user accounts of data or sorry 4 billion 4 billion user accounts of data is not
30:10
enough. It's not sufficient to train competitive generative AI models. And so
30:16
they literally started talking about should we buy up should we just acquire
30:22
Simon and Schustster? Should we look for other types of data elsewhere? should we
30:28
continue to scrape things off the internet? And eventually what they concluded was to not buy a publishing
30:34
house, but to just download torrent all of those books off of the dark web and continue to feed ever more data into
30:41
their models. And so that is just one data point for understanding that the scale has jumped from social media era
30:48
to generative AI era. And that also means the scale for the content moderation and the labor contracting
30:54
workforce has jumped to meet that demand. And you can see that scale jump with
31:00
data centers as well. So, so before with data center development, we were not talking about a level of data center
31:06
development that would actually strain the global grid in terms of electricity.
31:11
Now we are. And there was a recent McKenzie report that projected that
31:16
based on the current expansion of data centers and supercomputers just to bolster the AI development and
31:23
deployment, we would need to add two to six times the amount of energy consumed
31:28
annually in the state of California, which is the fifth largest economy in the world, onto the global grid in the
31:34
next 5 years. And most of that will be fossil fuels. Mhm. And so we are reaching a fundamentally
31:43
different tier of data consumption, data scraping, contract work, data center
31:50
development, resource extraction, energy consumption than ever before.
31:56
Mhm. So that's almost why Meta acquired 49% stake in scale AI, right? did that
32:04
just happened that last week uh in this month. So what did you think about those
32:12
moves? Yeah, it's exactly right. They have recognized, Meta has recognized that in
32:18
order to continue pursuing this particularly large scale AI development approach, one of the key ingredients for
32:25
being competitive is that labor force. And scale AI, which is one of the companies that I write about in my book,
32:32
is has been particularly successful from a business metric standpoint at
32:38
accumulating a large base of labor and paying them very very little money. And
32:45
ultimately, Meta wants access to that workforce as well as the actual um
32:54
business intelligence that Scale has accumulated from servicing so many different AI companies through its
32:59
contract workforce. And this once again highlights how important these workers actually are. They
33:07
actually have access to a lot of the secret sauce of these companies and their AI development. Without these
33:13
contract workers, those AI models really would not work remotely like they do
33:19
today. And Scale has a hold of all of those instructions. That is an extraordinary
33:25
amount of value. So Meta was acquiring not just the workforce, but also those instruction manuals and the ability to
33:33
peer into the strategies of what its competitors have been doing to develop
33:38
their AI technologies. Right. Right. So do you think that kind of exploitation could
33:45
also happen in countries like Japan? Because there's a lot of data center construction going on all over Japan,
33:52
and we're talking about what's called the digital trade deficit. It's
34:00
called that because we use a lot of the software and services from the US.
34:06
So what's your take on that? Japan is very vulnerable to both the resource exploitation and the labor
34:13
exploitation because ultimately you know OpenAI is trying to um aggressively
34:19
expand its consumer base in Japan. It's why Tokyo was one of the first
34:25
foreign offices that it set up. And when they are trying to cater to
34:32
the Japanese population, they need Japanese-language speakers to do the kind of content moderation and model
34:38
curation that the Kenyan workers did for English-speaking models. And so
34:44
there are only so many places in the world that have large Japanese speaking populations and Japan is one of the
34:50
primary ones. And so they will absolutely look for workforces in Japan,
34:56
in more economically vulnerable parts of Japan, to recruit those workers and try to replicate the playbook that
35:03
they have used for other models in English-language development. And
35:10
they are also dramatically trying to expand the amount of data centers that are being built in Japan. Actually, for
35:16
my book I spoke with a Japanese-based data center investor who told me that
35:23
the amount of power that these companies are seeking for large-scale
35:32
data center construction is something that has just never been seen before. And these are projects
35:39
where they're not just putting down data centers with existing power
35:44
infrastructure. They are planning where to build more power plants so that they
35:51
have enough power to deliver to these data centers. But we've already seen reporting out of the US that when this
35:57
happens, when you land massive data centers and massive new power
36:04
plants in different parts of the US, primarily in rural communities where there's a lot of land, cheap land, cheap
36:11
electricity and things like that, it can lead to really weird distortions in the electric grid. It can erode
36:18
the grid's resilience. It can hike up utility prices for
36:23
people well beyond just that particular community. And so that is something that
36:28
Japanese residents are going to have to worry about now: as these data centers continue to sprawl in places
36:36
that might be hidden from sight from most urban centers, they will still feel
36:41
the impacts on their water utilities, their energy utilities. They might start feeling the impacts on air quality if
36:49
the power plants are fossil fuel-based and that will have ripple effects
36:54
long-standing ripple effects well beyond just the next few years. Right. Okay. Yeah, that's a great
37:01
insight. Um, so sorry, I don't want to keep you too long, but there's a
37:07
final part: AI and a call for governance. So investors pour
37:15
lots of money into AI startups, uh, like OpenAI, so they can accelerate their
37:21
research and development, and one of the key investors is Japan-based
37:27
SoftBank. And so those investors, um, play an
37:33
important role here. So how do you think investors hold themselves
37:38
accountable for AI development? Or do you mean, like, what is the
37:44
responsibility that investors have had? Yes. Yes. Um investors have had a huge
37:51
responsibility in facilitating this current reckless race in AI
37:58
development. And that is because many of the investors in the AI era are making
38:06
bets that are not based on the sustainability of the business or
38:13
financials of their investment, but on the idea that they might be able to cash out somewhere along the bubble before it
38:20
bursts and earn back at least their investment. And the problem is, you
38:27
know, I was just talking with investors this morning who were saying to me that the bubble has gotten
38:34
extraordinarily big and the risk is going to be inherited by the entire economy because some of this investment
38:40
is coming from university endowments, people's retirement funds, um, from all parts of the financial markets. And
38:50
so when the bubble pops, if the investment doesn't get returned, that is people's retirements, life savings that
38:56
are actually going up in smoke, not just, you know, funny money that
39:01
doesn't impact people if it disappears. Um, and right now we are
39:08
seeing a lot of investors pumping so much money into the development of this
39:13
technology without necessarily getting the returns back in part because there is just so much hype that people are
39:19
really afraid of losing out. And a lot of different funds have realized that if
39:24
they say that they are investing in AI, it is a really great way to amass more
39:29
capital, not necessarily a way to generate more returns, but simply a way to fundraise
39:36
faster. And there are significant career risks for investors to go against the AI
39:43
hype train. So there's like all of these different compounding factors that are leading investors to glom onto this
39:51
investment without necessarily seeing smart financial returns,
39:56
without actually seeing the math work out. And unfortunately that has continued to perpetuate the bubble well
40:03
beyond what is sound. And ultimately, if we want to shift towards other forms of
40:11
AI development that are going to be more beneficial and less costly, we do need to
40:18
redistribute the capital first and foremost to those other approaches. And so investors have to
40:25
take bold action in moving to those other approaches in order to get there.
40:30
Right. Right. Um, so the final part is about AGI.
40:37
Everyone's talking about AGI. Everyone says that AGI is a few years away, but as you wrote in the book,
40:44
the definition is really vague. So how do we
40:50
think about it? How do we get ready for AGI, or
40:57
how do we get ready for a world like that? So I would push back against the idea
41:03
that everyone is saying AGI is a few years away. It's only people who
41:09
could make a lot of money from everyone believing that AGI is a few years away that are actually saying
41:15
that. So, there was a New York Times article recently whose headline was
41:22
"Why We Likely Won't Get AGI Anytime Soon." And it cited a stat from a survey
41:28
of well-respected AI researchers in the field. And 75% of them think that we
41:35
don't even have the techniques yet to develop AGI, if we ever will at all. Mhm.
41:40
And so I think it's really important to not put the cart before the horse and start talking about how we would live
41:46
with AGI, and just recognize that this is not coming anytime soon.
41:52
Scientists don't actually think it's coming anytime soon. They don't even know if it's going to come ever.
41:57
And we should ultimately focus on how we can thoughtfully
42:04
redistribute capital and resources right now to develop AI technologies that have
42:09
nothing to do with the quest for AGI, but are just beneficial applications that are task-specific and target
42:17
well-scoped challenges. Whether that's improving health care by identifying cancer earlier in MRI scans, or improving
42:24
drug discovery, such as with DeepMind's AlphaFold system, which accurately predicts how proteins fold from their
42:37
amino acid sequences and won the Nobel Prize in Chemistry in 2024, or
42:37
integrating more renewable energy into the grid so that we
42:43
can accelerate a cleaner energy future, or discovering new materials that improve
42:49
energy storage capacity. Um, those are exactly the kinds of AI technologies we should be leaning into,
42:56
rather than pouring our time, energy, and focus into
43:04
this quest to build a so-called everything machine that will probably never materialize. It
43:11
is consuming extraordinary amounts of resources right now and creating enormous amounts of labor harms,
43:16
and it will not actually deliver the economic gains that the people trying to build this technology
43:24
say it will. This technology's track record
43:29
in delivering productivity gains to people is modest at best. Um, so I
43:38
really think we have an opportunity to help build AI technologies that actually work,
43:46
rather than consuming all of these resources to ultimately build AI that doesn't.
43:52
So what you're saying is that not everything needs to be a generative AI model?
44:00
Exactly. Generative AI is just one small slice of AI
44:05
technology, and it has had the worst cost-benefit
44:11
trade-off so far. Yeah, yeah. We probably have to let you go now. So, thank you. It was a pleasure talking with you.
44:18
Thank you so much for joining us today. Thank you so much for having me. Yes. Thank you.
44:27
[Music]

Last updated: July 20, 2025, 16:17