AI, Governance, and the Future of Humanity

Jamie Metzl

If you weren’t paying attention to AI before, I’m guessing you are now.

It’s hard for most people to believe that ChatGPT was only released to the public last November. Today, ChatGPT and systems like it are everywhere and on nearly everyone’s mind. But if you think systems like ChatGPT are just better versions of Google search, you are missing the point. A better analogy is electricity. How did electricity impact your life today? It wasn’t just that your horseless carriage moved without a horse.

I recently joined AI tech leaders in signing a statement asserting that “Mitigating the risk of extinction from AI should be a global priority alongside other societal-scale risks such as pandemics and nuclear war,” even though I strongly believe this framing is incomplete and insufficient.

A basic challenge we face is that AI systems are getting smarter at exponential rates while our biological brain capacity remains constant and our collective cultural evolution and governance systems move slowly. This creates potential upsides and downsides.

The upsides include helping us do nearly everything better: healthcare, agriculture, manufacturing, space exploration…

The potential downsides include misalignment between AI systems and human interests and, more likely in my view, humans using AI systems in dangerous ways to gain advantage over others.

We’re already seeing that today. If you are a minister of defense, not investing in autonomous killer robots would likely ensure that your country loses its next conflict with an adversary that imposes few restrictions on itself and its machines. But if you do invest, you add to the existential risk facing humanity.

But AI risk is not a singular, standalone problem, as the experts’ own statement suggests. It is the same category of problem as pandemics, WMD, and much else.

That’s why the idea of a global AI body modeled after the International Atomic Energy Agency (IAEA), although decent enough, is wildly insufficient at best and a dangerous distraction at worst. It’s not just that the nuclear non-proliferation regime is itself breaking down, with new nuclear weapons states and more on the horizon. The deeper issue is the way our world is organized.

As I have long argued, the greatest challenge we face is that while our biggest problems are global and common, we don’t have a sufficient framework for addressing that entire category of problems. AI is just one of those problems. So are climate change, the militarization of space, biodiversity loss, pandemics, WMD, global systemic poverty, and much more. If we don’t identify and address that underlying problem, there is no way we can sufficiently address all of its individual manifestations. Our world’s global operating system needs an upgrade based on a recognition of the mutual responsibilities of our complex global interdependence.

If we spend all of our energy creating another equivalent of the IAEA or WHO without asking why those institutions are not able to achieve their goals, we will simply replicate our collective failures in yet another domain. I’m a big fan of those organizations, but we shouldn’t deceive ourselves.

That’s why I and others founded OneShared.World in the earliest days of the pandemic. I hope you will read our Declaration of Interdependence, which has been translated into 20 languages.

I also explore this at length in my new book, The Great Biohack: Recasting Life in an Age of Revolutionary Technology. The book is already written but won’t come out until next May (because the book publishing process is slow).

As I argue in the book, trying to tackle one global problem at a time is like developing a separate vaccine for each individual flu strain. Far better to identify what all flu strains of concern have in common and target that.

A global OS upgrade is a tall order. We’ve done it before, in 1648 after the Thirty Years’ War and in 1945 after WWII. We need to do it again now. Far better to do it to prevent a deadly cataclysm than in response to one. The COVID-19 pandemic should have been a wake-up call, but we did not wake up. I hope you’ll read the CNN editorial I wrote in 2020. It’s sad to me that we’ve made so little progress.

But the current focus on AI is giving us another opportunity to think differently and work proactively to optimize benefits and minimize harms associated with our collective superpowers. To succeed, however, we need to make sure we are focusing on the right issue. If we make this a narrow struggle about AI governance alone, we will miss the broader context and fail.

There can ultimately be no sufficient AI governance outside the context of broader global organization. We simply cannot succeed at AI governance while leaving everything else about how our world is organized as is. Narrow AI governance, like narrow climate or WMD governance, is an easier problem than the one I have identified, but our goal must be actually solving the problem, not engaging in time-wasting theatrics that cannot and will not work.

I’m all in for governance at all levels, as we outlined in our WHO advisory committee report on human genome editing, but governance is all about operating systems and contexts.

I outline in my book what we can and must do to meet this challenge and will post extensively on this in the future. I also examined this issue in my keynote address to the World Government Summit in Dubai earlier this year.