Saving the State of AI Policy in DC

An Interview with Brian Chau, Executive Director of the Alliance for the Future - the New Pro-AI Think Tank in DC

Even when you have a winning hand, it’s hard to win if the rules of the game are unfairly stacked against you. This seems obvious. But in the fight to accelerate the development of capable, powerful, and life-changing artificial intelligence, we seem far too content to let our ideological opponents stack the rules in the political arena. We’re ceding ground. As it stands, though, we can’t afford to cede an inch. So why are we fighting with one hand tied behind our backs?

The ultimate goal, as I see it, is to accelerate technological progress. This is especially true for the development of AI. And many of us feel this way – we are pro-tech, pro-AI, and we believe that it will change our world for the better. But there are also those that don’t feel that way. Much to the contrary, there’s an entire faction of individuals and organizations that want to slow or altogether halt the development of AI—and unfortunately, they’re the ones winning the battle in Washington D.C.

As has been extensively reported by this point, there is a massive, well-funded, and largely coherent lobbying effort being carried out by the “billionaire-backed effective altruism (EA) movement that has made preventing an AI apocalypse one of its top priorities.” Open Philanthropy - itself primarily funded by billionaire Dustin Moskovitz - funds the salaries of dozens of AI fellows scattered throughout key congressional offices and federal agencies, and has backed the creation of influential D.C. think tanks, like the Institute for AI Policy and Strategy (IAPS) and the Center for AI Safety (CAIS).

When the Biden administration formulates critical new policy regarding AI, it’s these forces that act to tip the scale. And so, while thousands of technologists across Silicon Valley and the rest of the country “just build,” the game is already being lost in the halls where power resides. The policy-making process remains captured by EAs. Until now, that is.

Enter the Alliance for the Future. The AFTF is a new nonprofit in Washington D.C. that is bringing together entrepreneurs, technologists, and policy experts to advocate for progress in AI and prove to policymakers just how important it is that we don’t slow down. About a month ago, right when we needed them most, the AFTF burst onto the scene ready to “oppose stagnation and advocate for the benefits of technological progress in the political arena.” And they’ve put together an all-star team to do so.

Leading the team is Brian Chau, a former machine learning engineer with a background in pure mathematics. He now serves as AFTF’s Executive Director, working to guide the organization and achieve its ultimate mission. Alongside him is quite an impressive group, to say the least: Perry Metzger, a seasoned computer scientist, entrepreneur, and consultant, acts as Chairman of the Board. Meanwhile, Jon Askonas, Assistant Professor of Politics and senior fellow at the Foundation for American Innovation, brings to the team plenty of expertise in national security, defense technology, AI, and philosophy.

Another integral member of the board is none other than Guillaume Verdon—known to many as @BasedBeffJezos—founder of the AI hardware startup Extropic and, of course, one of the key figures behind the Effective Accelerationism (e/acc) movement. Rounding out this formidable lineup, Dr. Keegan McBride serves as an advisor on government AI use, bringing his extensive research on disruptive technologies from the Oxford Internet Institute to the forefront of policy discussions.

Altogether, the AFTF represents the beginning of a new wave of institutionally-minded actors looking to make real headway in Washington D.C. For far too long now, our nation’s capital has been the domain of “decels” and “doomers.” Now, it’s time for pro-AI voices to be heard. You might ask, ‘but how?’ What does a think tank like AFTF do? And what policies are they advocating?

So, I sat down with Executive Director Brian Chau to answer just that. In a fascinating, hour-long interview, we touched on everything: from politicians’ skewed misunderstandings of the technology at hand to the correct approach for regulating AI, from how AI is the “free speech issue of the decade” to how AI will encourage healthy work—not a jobs crisis. We got deep into the weeds on all things AI and how the Alliance for the Future aims to take back the state of technology policy in D.C. The following is an edited transcript of the interview, trimmed down quite a bit for clarity and to capture the best, most intriguing parts of our conversation:

Spor: “So, what does the day-to-day work on-the-ground look like for the Alliance for the Future? What does a think tank really do?”

Brian Chau: “For sure, so we mostly work with the legislative branch at this point in time. Essentially, how it works is that there’s an informative stage, a drafting stage, and a voting stage [for legislation]. Almost all ML policy is still at the informative stage, which involves people, like Sam Altman and others, going to testify in front of specific committees, presenting evidence, and setting the baseline expectations and factual understanding of machine learning, which in my opinion, is extremely poor. And so, getting that baseline understanding through [to policymakers] and framing it in an optimistic, pragmatic, and nonpartisan way is what we’re trying to do at the current moment.”

Spor: “And what do you think our politicians misunderstand most about AI?”

Brian: “It’s a cliché, but a lot of sentiment is being shaped by headlines. And those headlines are overwhelmingly negative - ‘if it bleeds, it leads.’ People committing fraud, deep fakes, misrepresentation - those are things that are real problems, sure, but there’s not as much of an understanding of the immense benefits of artificial intelligence. It’s not just ChatGPT, [AI] is used by farmers to check if their crops are ripe, it’s used by manufacturers to automate their machines, and so much more. Having this full picture of the economic benefits [of AI] is something that will be hugely beneficial in getting them to have a much more optimistic perspective.

And on the technical side, there are a few misunderstandings. Number one is [the need for] further clarification of the training process—essentially that fine-tuning exists at all. There are several stages [of training], and these latter stages are what govern the ultimate political views [of models]. This is something that isn’t evident from many of the narratives surrounding ML, certainly on the issue of political bias with ChatGPT or Google Gemini. This clarification is one that I’ve made fairly often and that the staff of representatives and senators are certainly amenable to hearing.”

Spor: “What is the ‘Coming Overreach’ you talk about in the AFTF manifesto? And how big is the threat of regulatory capture, would you say?”

Brian: “DC is full of people that want to fit every new issue into their existing portfolio. You have people that are concerned about disinformation, racism, automation and union jobs, and they will always try to cram any new technological change into their existing agenda. In general, there’s this myopia. This is where I think the overreach will come from. The net effect of their policies will overwhelmingly be to take away the positives—the benefits that we can all have of cheaper goods, being able to do more with your time and money, etc.

At this moment, broader regulatory capture actions would require additional legislation to be passed. This is unlikely until at least the next election because there is a partisan split in Congress. In the long term, though, regulatory capture is a significant threat. And it may end up being the case that these big, legacy corporations [like Google or Microsoft] end up being the tie-breaker in tipping the scale in support of policies that would be beneficial to themselves.”

Spor: “You talk a bit about national security and the divide between public and private in the Manifesto. How is AI a ‘dual-use technology’ and how does this inform how we regulate it?”

Brian: “Essentially, the idea is that some technologies can be used in both military and civilian life, and most digital technologies fall into this category. Historically, the strength of America is that these things are developed in conjunction with both the public and private sectors. If we’re to look at a regulatory approach that continues this – like strategic cybersecurity investments, or strategic funding in a DARPA-like program – that’s a way in which policy can be used to make the way we use these technologies more American, not less. And this is the technology that we’ll need to beat China, to beat Russia. It’s better to have this technology as a country, than not.”

Spor: “How is AI the ‘free speech issue of the decade’?”

Brian: “It absolutely is because it’s not just something that shapes interpersonal communication, but also shapes someone’s internal process of coming up with ideas – you can see this already with people using it as a kind of reference or to draft future ideas, like a self-management tool. What’s crucial about this is that there’s an attempt not just to censor the outer, interpersonal world, but to censor the internal world in which people draft their thoughts. This is extremely dangerous. And if there is indeed government coordination (see the pending Missouri v. Biden case), it might be unconstitutional as well.”

Spor: “Something else I was really hoping to get your take on is open source. What is the Alliance’s stance on open source AI?”

Brian: “Open source is one of the most crucial deployments of machine learning that there can possibly be—in effect, it becomes a much more well-scrutinized and well-developed system. Second of all, it’s an immense public good. These early-stage training runs are enormously compute-intensive, and many companies will simply not have the money for that. Open source companies like Mistral, and open source models like Llama, are a huge benefit to these people.

And in terms of free speech, code is protected under the First Amendment. In many cases, open source will be the answer to some of these political problems—problems of censorship—because it is itself a technology that is difficult to censor. So, I see open source as this huge public resource that merges many of the best aspects of public works, public goods, as well as a real spirit of innovation.”

Spor: “How do you see AI affecting work?”

Brian: “This is one of the concerns that I take most seriously. The past twenty years of automation have shifted work away from blue-collar jobs to office work, work that is much less enjoyable, and the question is: is machine learning going to follow that short-term pattern or the long-term pattern? Because the long arc of automation is not necessarily negative; in many ways it’s been positive.

Will AI be more like the last twenty years or will it be like the last, say, 100 or 1000 years? I would say that in many cases, the things that AI is automating are the things that people despise the most about their work – things like paperwork compliance, tax filings; really, the sort of procedural generation of text, right? It almost sounds too good to be true.

I don’t think it’s that clear that automation will be negative, in this case. In many cases, instead, it will be actively beneficial. In fact, people will be happy they won’t have to do these things and will be able to do more productive things with their time. Leaving aside the long-term, in the short- and medium-term, I think it’ll be overwhelmingly positive.”

Spor: “What do you think are the biggest challenges you’ll face in trying to advance pro-tech, pro-AI policy in DC?”

Brian: “If we’re able to keep a steady state of policy with a split Congress, it’ll be quite straightforward and procedural to defend against the threats against machine learning. The thing that keeps me up at night is this ‘politics of emergency’ that you see recur throughout history. When there’s a news cycle that picks up on just the right emotional tone, disconnected from reality, this wave of mania can sweep both parties. In many cases, the most permanent and destructive policies are put in place—and never reversed—during such an age of hysteria.

If the broadly pro-tech majority are not organized – and more specifically, the people that have a vested interest in tech – if those people are not organized, they can be easily overcome by this kind of mania. But having the right people in positions of authority and in media to push back against this—this kind of insurance policy is what I’m trying to build here.”

Spor: “Nuclear seems like an apt analogy here.”

Brian: “Oh, absolutely. The instigating incidents for the de facto ban on nuclear reactors over the past 40-50 years were meltdowns of old reactors with substandard technology and poor practices, often in foreign countries, too (Chernobyl). And these false analogies, these arguments from headlines that are not indicative of the current state of things—that’s an excellent parallel to what we’re facing in machine learning today. And I think the lost benefits will actually be far greater – if anything, the nuclear comparison doesn’t go far enough.”

Spor: “Finally, how can people support the Alliance for the Future?”

Brian: “The number one thing is to let a friend know. Talk about how you’re using the technology in your everyday lives, donate if you have the resources, or just help share and build a broader community around being optimistic about AI. You can go to https://www.affuture.org/donate/ to donate and we have several newsletters – From the New World and Data Point. You can find us on Twitter at @psychosort and @aftfuture. People can reach out on Twitter or [email protected].”

Thanks for reading and be sure to check out the Alliance for the Future, read their Manifesto, and support them in any way you can!