How to Fund an AI Safety Empire

How Vitalik Buterin and a “$665M Crypto War Chest” Are Getting Governments Hooked on an “AI Apocalypse”

For quite a while now, the world of AI has been caught in a foundational ideological struggle. On one side, there are the techno-pessimists, the AI “safetyists,” the effective altruists. Theirs is the view that the increasing capabilities of artificial intelligence present a danger to humanity. Ultimately, they believe the invention of artificial general intelligence to be a death sentence, an existential risk.

On the other side, you have the accelerationists, the techno-optimists, the e/accs. Theirs is practically the opposite view: that we must accelerate technological progress, that AI must be allowed to flourish, improve, and grow, and that all people should have near-unrestricted access to the fruits of technology.

Together, the two represent opposite sides of a vast struggle that permeates the AI landscape. And just yesterday, news broke that set the entire landscape abuzz.

Early yesterday morning, Politico dropped quite an intriguing article. The piece detailed the significant financial boost the Future of Life Institute (FLI) received from cryptocurrency mogul and Ethereum co-founder Vitalik Buterin back in May of 2021: a staggering donation of Shiba Inu (SHIB) cryptocurrency that would ultimately be worth $665 million, catapulting FLI’s financial resources and shifting the landscape of AI safety advocacy in the blink of an eye.

Within weeks, FLI, a small nonprofit of no more than two dozen employees, had cashed in the 46 trillion SHIB tokens, giving it a war chest of around $665 million and putting it on the same financial footing as heavyweight nonprofits like the Brookings Institution and the ACLU Foundation.

When the news broke, it sent the entire online AI community into a frenzy. But, unsurprisingly, you might have some questions. What is the Future of Life Institute? Why is a donation of this caliber such a big deal? How does it alter the AI policy ecosystem? And where did some of the initial reporting go wrong? If you’ll let me, I can answer all of those questions and more.

AI Safety and the World of Policy

What is the Future of Life Institute? Founded back in 2014 by Max Tegmark and Jaan Tallinn, FLI is a think tank that advises policymakers in Europe and the US with the goal of “steering transformative technology towards benefiting life and away from extreme large-scale risks.” Specifically, their largest “cause area” is pushing the theory that AGI, or artificial general intelligence, poses an extinction-level threat to mankind. They’re best known for last year’s viral open letter calling for a six-month “pause” on training the most powerful AI systems.

Elsewhere, their work involves plenty of policy advocacy. With efforts in the United States, the European Union, and the United Nations, they influence policymakers and inform legislation, like the EU’s recent AI Act. Again, the main mission behind this work is to push the narrative that more powerful, more capable AI systems are dangerous and must be regulated, surveilled, and, if too large for their liking, banned.

What was once a fringe theory, championed by online bloggers and the occasional academic (looking at you, Nick Bostrom), has grown exponentially; the “AI is going to lead to the apocalypse” crowd is larger than ever. And FLI is just one of many organizations dedicated to this cause.

The “Effective Altruism” movement is chiefly to blame for the rise of this apocalyptic thinking. Supported by wealthy donors, this movement has funded an entire ecosystem. The largest donor of them all is Open Philanthropy, bankrolled by Dustin Moskovitz, the billionaire Facebook co-founder. Over $330 million of its money has gone to the AI “x-risk” cause in the form of grants that span the full breadth of the AI safety ecosystem.

Altogether, based on data obtained by Nirit Weiss-Blatt, funding for “AI Safety research that is focused on reducing risks from advanced AI (AGI) such as existential risks” amounts to just about $500 million. And that’s before we take into account Vitalik Buterin and what some are calling “Shibagate.”

What’s the Real Story Behind Vitalik’s Massive Donation?

When the news broke of Vitalik’s $665 million donation to Tegmark’s FLI, many on the techno-optimist side of the AI tug-of-war were dismayed. After all, just a few months before the news, Vitalik had waxed at length in a blog post about his commitment to what he called “d/acc,” a fork of traditional e/acc principles focused more on the benefits of decentralization and defense, one that placed him squarely in the camp of those quite concerned, but tentatively optimistic, about a future in which we reach AGI.

But such a massive financial gift to FLI was disheartening to many. While it’s not anyone’s place to say how Vitalik should spend his own money, many were rightfully critical. How does handing one highly techno-pessimistic institution an enormous war chest of cash, enough to dominate the entire AI policy ecosystem, align with decentralization and techno-optimism?

Let’s go over the facts, because it seems that much was lost in translation.

In May 2021, Vitalik Buterin burned 80% of his SHIB cryptocurrency holdings, planning to use the remaining 20% for “long-term charitable causes,” and moved 46 trillion tokens to FLI. But Vitalik never expected how much the gift would ultimately be worth.

In a tweet, he clarified: “I quickly sent a pile of SHIB, thinking it would surely drop 100x in a few days so I had to act fast, and expecting they would be able to cash out at most like $10-25m. But of course SHIB massively outperformed my expectations…”

Over the following weeks, from May 19 to June 4, FLI used FTX to liquidate the SHIB tokens, to the tune of the aforementioned $665 million. It’s also worth noting that at the time, FLI wasn’t as hardcore “doomer” as it is today. Instead, its focus was much more diversified, covering other theorized existential risks, from biotechnology to nuclear weapons, as much as, if not more than, the AI question.
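To get a feel for just how far the gift outran expectations, here’s a quick back-of-the-envelope sketch. This is illustrative arithmetic only, based on the figures reported above (46 trillion tokens, roughly $665 million realized, and the $10-25M expectation from Vitalik’s tweet), not FLI’s actual accounting:

```python
# Rough, illustrative arithmetic using the figures reported above;
# this is not FLI's actual accounting.
tokens_donated = 46 * 10**12   # 46 trillion SHIB
proceeds_usd = 665 * 10**6     # ~$665 million realized May 19 - June 4

# Implied average sale price per token across the liquidation window.
avg_price = proceeds_usd / tokens_donated
print(f"Implied average price: ${avg_price:.8f} per SHIB")
# -> Implied average price: $0.00001446 per SHIB

# How far the haul outran Vitalik's stated $10-25M expectation.
low, high = 10 * 10**6, 25 * 10**6
print(f"About {proceeds_usd / high:.0f}x to {proceeds_usd / low:.0f}x expectations")
# -> About 27x to 66x expectations
```

Even at the low end, FLI realized more than twenty-five times what its donor expected, which is what turned a quirky crypto gift into one of the largest war chests in the nonprofit policy world.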

So how should we view Buterin’s donation in light of this clarification? Well, it still depends on your stance on the larger question at hand: is FLI’s work something to support or to oppose?

“Technology is Amazing, and There Are Very High Costs to Delaying It.”

The Future of Life Institute is arguably the highest-profile organization fighting for stringent regulation of AI. Max Tegmark, its president and co-founder, testified at a Senate AI forum last autumn. And in October, the United Nations selected Jaan Tallinn, billionaire co-founder of Skype and FLI board member, for its newly established AI Advisory Body.

FLI also heavily influenced the European Union’s AI Act, where its lobbyists were instrumental in adding harsh regulations for foundation models, like classifying models above certain compute thresholds as “high-risk.”

And now, with its crypto-backed war chest, FLI is positioned to dominate the entire AI policy landscape. Whether by spending directly on lobbying, by spinning off smaller organizations and funding their efforts, or by pushing apocalyptic narratives in the press, FLI’s newfound power makes it all the more likely that governments and politicians around the world will work to restrict technological progress, not advance it.

And so, whether or not he meant to, Vitalik created a monster, one that will continue to lead the EU, the US, and potentially the entire world down the wrong path.

Voices urging politicians and the public alike to consider the massive benefits of AI will instead be drowned out by those sounding the alarm. In his d/acc blog post, which came long after the donation, Vitalik himself acknowledged that “technology is amazing, and there are very high costs to delaying it.”

At this point, one can only hope that Vitalik, and others like him, will take an alternative approach this time around. One that funds science and technology. One that funds projects like Neuralink, which aim to give paraplegics their freedom back. One that funds projects working to cure cancer with AI. One that funds moonshot technological projects with the hope of propelling humanity forward, instead of chasing unproven demons.

I sure hope that’s the case.