Interview with Drexel and Kelley on Chinese AI
Restrictions on chip exports to the PRC may reduce the probability of an AI disaster
I’m delighted to amplify some work by Bill Drexel and Hannah Kelley on the dangers of Chinese AI development. Bill is an associate fellow at the Center for a New American Security, where he researches artificial intelligence, technology competition, and national security. Hannah is a research associate, also at the Center for a New American Security, where she studies U.S. technology strategy and international technology cooperation.
Bill and Hannah recently wrote an article for Foreign Affairs called "China Is Flirting With AI Catastrophe" that I highly recommend. They’ve graciously agreed to this interview request. – Joe
CRR: Thank you both for taking the time to share your expertise. Before we dive into things, let’s provide some context. What is artificial intelligence, or AI, and why is it important?
Bill: AI definitions vary widely, but per the Pentagon's definition, it's basically any technology that allows a machine to perform tasks that would otherwise require human intelligence.
Most of what we hear about today is a subset of AI called deep learning, which powers a broad range of systems and enables more sophisticated capabilities like those we see in ChatGPT.
AI is important because it's a general-purpose technology, like steam power, electricity, or nuclear power, which means you can apply it to a whole lot of things; it will unlock tremendous economic growth and open the door to new military advantages.
Hannah: Building on Bill’s point, AI holds promise for essentially every area of human experience, from transportation to healthcare to agriculture and climate action; the list goes on. It is both a general-purpose technology and an enabling technology.
While our piece in Foreign Affairs focused more on the potential for catastrophe in certain contexts and environments, it's also important to highlight the potential promise of AI if harnessed and leveraged in responsible ways. So the stakes are high on both sides of the equation.
AI is also proving to be an increasingly important area for geostrategic competition.
We see this playing out between the United States and China, but even more broadly. Everyone sees it as this revolutionary technology, and naturally, everyone wants to be at the leading edge of that technological revolution.
This can inspire coordination between states, but it can also exacerbate competition.
So it's a revolutionary technology in terms of its technical impact, but also in terms of how it might impact relationships between states.
CRR: Thank you for laying that out. As you both mentioned, there are potential benefits to AI. At the same time, it's important to talk about some of the downsides, as you discussed in your article.
So, what are some of the dangers of AI?
Bill: So I think of it in three buckets.
You have dangerous new capabilities. You could think of that in terms of the ability to create more sophisticated bioweapons, or new cyber capabilities that could crack a lot of the defenses we currently have in critical systems.
You can also see that in what China has done in creating new capabilities to surveil and control its population.
Then there are technical AI failures, where no one means to misuse AI but it nonetheless produces negative consequences.
So you might think of a weapon system accidentally mistaking a school for a bunker or something as simple as a self-driving car not recognizing a pedestrian.
Then there is the question of integrating AI into complex systems.
A new technology might create unsafe environments or introduce vulnerabilities simply through how people relate to it.
You could think of over-dependence on AI in critical systems: weapons, target detection, financial systems, and various other kinds of complex systems.
Hannah: Beyond the dangers tied to the technology itself, which Bill highlighted, there is also a lot of danger in coming to rely on AI too much, too fast, in terms of its integration with other critical technologies.
There are concerns surrounding integrating these systems before AI is “ready”—whatever you define ready to be—as premature integration might expose vulnerabilities in other tech areas that we aren't prepared or equipped to mitigate at pace.
Bill also mentioned some of the dangerous capabilities that AI could herald in for weapons development.
But even on the commercial front, if we aren’t careful with AI and bio for instance, or AI and cyber, or AI and finance, we could end up with issues like the 2010 financial flash crash or the surprising discovery of myriad novel toxins.
There’s a lot we don’t know about how rapidly developing AI will bump up against other critical technology areas. And if some of these bad outcomes have already occurred as accidents or unintended consequences, it's scary to think what an actor deliberately choosing to exploit or weaponize these vulnerabilities might accomplish.
CRR: Hannah, you mentioned running too far, too fast on AI. That risk appears especially acute within China. You both wrote that “The danger of AI accidents is most severe in China.” Why is that?
Hannah: So I think that China’s national optimism towards AI is an important place to start.
We mentioned in our article a statistic that approximately four out of five Chinese nationals are more optimistic about the promise of AI than they are worried about the risks. Compare that to the United States, where about 35 percent of Americans are more optimistic than pessimistic.
This AI optimism alone doesn’t necessarily make for a more dangerous environment in China. Rather, it’s the inputs driving that optimism that are truly cause for concern. These include a loose safety culture where risk is outweighed by technological promise pretty much every time, a reckless drive to go where “risk-sensitive Americans” won't, and a long history of chronic crisis mismanagement and state-induced disaster amnesia to avoid public backlash. Taken together, these dynamics make China a hotbed for AI catastrophe in much the same way that the Soviet Union was a hotbed for nuclear catastrophe.
Bill: The only thing I'd add is I've heard it said that America is kind of in its Black Mirror phase with regard to technology, whereas China is still in its Star Trek phase.
You don't have to look very hard to find numerous examples of high-tech projects gone horribly wrong in China: you've got He Jiankui's germline gene editing; you've got the COVID crisis mismanagement; you've got all sorts of disaster amnesia. It's just a recipe for disaster.
Hannah: I also think it’s interesting how the United States was, at one point, in its Star Trek phase, but because of hard lessons learned, it moved into this Black Mirror mentality. Because there are no lessons learned in Chinese media surrounding these technologies, I don't see that jump happening for China anytime soon.
CRR: In your article, you both mentioned there’s a huge amount of techno-optimism in China that perhaps isn't warranted, especially given Beijing’s disastrous crisis management performance in recent years. Are there any voices in China warning about the dangers of AI?
Hannah: So as I mentioned, those key dynamics: the boundless ambition, the experimental freedom of a loose safety culture, the long history of crisis mismanagement, and that state-induced disaster amnesia all feed into this statewide sense of techno-optimism.
Because of this, coupled with the PRC's broader chokehold on dissent of any kind, the voices speaking out about the dangers of AI from within the PRC are relatively few.
On an individual basis, it is unclear how much influence those who are speaking out have in the broader conversation. And at the national level, the moments where China has shown support for standard-setting and the like raise the question, "Is it all for show?" since its actual operations at home tell a vastly different story in terms of values and priorities.
It's also important to note that those who have spoken out in response to other health and safety crises in the past have been harassed, detained, and discredited. We saw this with the doctors and journalists who raised the alarm about the COVID outbreak in Wuhan. We also saw this with those who flagged the 2002 SARS outbreak, as well as the HIV-contaminated blood transfusions of the 1990s.
So whistleblowers have essentially no protections in the PRC, and they also don't get far in terms of actually raising public awareness, which likely weighs heavily on decisions about whether to come forward at all, when the backlash will be great and the impact small.
Bill: Yeah, just to give a couple of specific examples of AI safety voices in China: Jeff Ding at George Washington University did a roundup of some of the Chinese writing on AI safety from before ChatGPT, and it gives some indication of the landscape. Most of the examples aren't available in English, and they seem somewhat peripheral. There are writers there, but it's unclear how much influence they actually have.
Since ChatGPT, China has tried to position itself as a world leader in AI regulation and beyond. Last month, the Chinese ambassador to the UN made some statements on AI, and there were a handful of signatures from Chinese nationals on recent statements from the Future of Life Institute and the Center for AI Safety.
But again, it's unclear how much of this is what they want to project and how much actual influence these voices have. By all indications, the industry is speeding ahead on its own incentives.
CRR: There are many parallels between negotiations over arms control, AI, and climate change: all three subjects pose existential risks to humanity or, in the case of climate change, near-existential risks to our way of life. The U.S. would strongly prefer to see climate change negotiations separated from other items in U.S.-China relations, such as Taiwan. Beijing has insisted, so far, on linking climate change negotiations to the broader political relationship, bitterly disappointing DC.
Are Washington and Brussels talking with Beijing about the dangers of AI, or is the PRC adopting the same approach it’s taking with regard to climate change?
Bill: I think the biggest difficulty in answering that is that this set of concerns is so new, and has popped up on the international stage so suddenly, that it's a little hard to tell.
I think one thing is that the CCP knows that the Biden administration really sees climate as a very high priority—that's very well established in their agenda and in their goals. So the CCP is seeking to leverage that.
I think it's unclear to both the Biden administration and to the PRC just how core AI concerns are going to end up being. It seems like there's still a degree of specification going on, where people are deciding how highly to prioritize this.
The other x-factor is that, again, China has a really strong motivation to be seen as a world leader on AI technology generally, which includes potential regulations.
So I think it's a little bit different from the climate discussion. It might end up turning out the same, but it's probably too early to tell.
CRR: What recommendations do you have for Western policymakers on limiting the dangers of AI, especially with regard to China?
Bill: I think a big part of it is building strong norms and projecting them internationally. But that's pretty difficult. We've struggled to establish strong norms for social media and other tech areas in recent years. So, that's not an easy lift, but it is an important part of the equation.
But even if we succeed in doing that, I think we can't necessarily expect China to abide by those norms, and we probably shouldn't unless we have some really solid means of verification. All of that is very downstream from what we're talking about now, though, and may come too slowly to be useful.
As such, I think we largely need to find ways to monitor dangerous developments in Chinese AI labs. We have precedent for doing this with some of their activities in bio, nuclear, and space operations, so we could perhaps build off of that.
I also think it is worth limiting China's access to the most dangerous possible capabilities, and the chip restrictions are a step in that direction. If we continue on that trajectory, that will also help. But nothing's foolproof, and, again, the technology is developing so fast, and the risks are so unknown in many cases, that we just have to be vigilant generally.
Hannah: I’ll just add that while it’s true that setting norms and standards doesn’t necessarily mean everyone will follow them, it does mean that you'll have something concrete to weigh behaviors against—which matters.
Democratic states need to be the ones to set norms and standards around AI to ensure, as much as possible, a net positive impact for humanity. And I think that comes down to leaning into our strengths.
Where Beijing is sprinting with total abandon toward rapid, and in many cases flimsy, development of largely understudied and unreliable capabilities; where it is sacrificing safety in pursuit of speed and supremacy; and where it is taking greater care to hide accidents and disasters than to avoid them in the first place, the United States, together with its allies and partners, has the resources and expertise to build responsible safety frameworks in step with impactful and resilient systems: collaborating on talent and on research and development, and sharing lessons learned if and when we do make mistakes.
CRR: What are some good resources for non-specialists seeking to track the latest AI developments? I’ll include links and please feel free to suggest your own work.
Bill: We at CNAS have a whole AI safety and stability team that recently launched, so we're coming out with stuff pretty regularly.
If you're interested in AI developments generally, a fun newsletter to follow is The Neuron.
But I'd also say, if you're interested in China specifically, Matt Sheehan at the Carnegie Endowment recently released a pretty large report on AI governance in China. Those are good places to start.
Hannah: In terms of some great background reading, I'll also flag two books by our colleague Paul Scharre. Army of None: Autonomous Weapons and the Future of War looks more specifically at military AI and Four Battlegrounds: Power in the Age of Artificial Intelligence explores the importance of being competitive in data, computing power, talent, and institutions.
CRR: This has been terrific. Thanks so much for taking the time and sharing your expertise. Looking forward to meeting both of you in person soon.
Bill and Hannah: See you soon.
Joe Webster is a senior fellow at the Atlantic Council and editor of the China-Russia Report. This article represents his own personal opinion.
The China-Russia Report is an independent, nonpartisan newsletter covering political, economic, and security affairs within and between China and Russia. All articles, comments, op-eds, etc. represent only the personal opinion of the author(s) and do not necessarily represent the position(s) of The China-Russia Report.