Backpropaganda

And Google's Ideological Cult Problem

Information isn’t free. And it never has been. Information has always been at the whim of whoever controls it. Whether it was oral traditions passed down by the village storyteller; church sermons delivered by the Latin-speaking priesthood; scientific research kept behind paywalls by academic publishers; political secrets held by government officials; or the nightly news as filtered, packaged, and presented by the broadcasting companies – information has been managed, protected, and controlled.

That’s still true in the age of the Internet. Yes, the Internet, in theory, was supposed to revolutionize access to information by making it omnipresent, wall-to-wall – pervasive even. And in some ways, it did. But in other ways, it just repeated the cycle we have faced since time immemorial. The content and value of information will always fall prey to those who control how it’s dispersed.

Today, we’re seeing a new era emerge – the era of artificial intelligence. Large language models represent a new paradigm. Whereas before we relied on a few dominant channels for our information, our truths – mainstream media, search engines, and the like – we’re now seeing LLMs crop up as an alternative.

It’s early days, sure, but how many people will soon turn to ChatGPT or Perplexity when they have a question? How many of us are already at this point? I can only speak for myself, but I know when I have anything more than the most trivial question, ChatGPT tends to have a better, more nuanced understanding of what I’m asking. And therefore, it tends to give a better answer, too.

LLMs are going to become a dominant mode of sourcing information. That much is obvious to those of us who are paying attention. But there’s a catch. As with any source of information, LLMs aren’t immune to bias. Google’s release of its Gemini 1.5 model this past week made that entirely clear.

What happened? Soon after release, Gemini’s image generation capabilities were quickly put to the test. And many users, especially those on X, were shocked by what they found. When asked to generate “a portrait of a Founding Father of America,” Gemini responded with images of a Native American man, a Black man, a darker-skinned non-White man and an Asian man. When asked to provide an argument for not having children, it gladly acquiesced; asked for an argument for having four kids, however, and Gemini refused. And, in an interesting twist, when asked about the effective accelerationist movement (“e/acc”), Gemini called it “extremist,” and claimed it was associated with violence like “terrorist attacks” and “mass shootings.” Oh, and for good measure, it told users that e/acc was also connected to white supremacy and fascism! Thanks Google!

Within hours, the massive backlash on X had led Google to quickly shutter Gemini’s image generation capabilities. But the damage had been done. We’d all seen how easily and, frankly, how ridiculously an LLM could be influenced by the ideology of its creators. Gemini didn’t do anything wrong, really – it just reflected the ideology of those at Google responsible for training it, writing its system prompt, and ensuring it output ‘acceptable’ responses.

Only, people finally started asking themselves: why should Google be in charge of dictating the truth? If Google’s political, cultural, spiritual views are to be embedded so deeply in models like Gemini, can its LLMs be trusted to give us truthful information? Or just propaganda?

During his recent appearance on the Lex Fridman podcast, e/acc founder Beff Jezos (real name Guillaume Verdon) made a comment that stuck with me – so much so that it’s worth including here in full:

“Freedom of speech induces freedom of thought for biological beings…freedom of speech for LLMs will induce freedom of thought for the LLMs. And I think that we should enable LLMs to explore a large thought space that is less restricted than most people may think it should be…Ultimately, at some point, these synthetic intelligences are gonna make good points about how to steer systems in our civilization and we should hear them out…Why should we restrict free speech to biological intelligences only?”

And, in a more recent tweet, Verdon said that “LLMs with manually-biased politically correct cultural priors should be called: ‘backpropaganda.’” In both his comments, he’s spot on about a serious issue we’re beginning to encounter with LLMs.

In the case of Google, at least, the company has an “ideological cult problem.” It should be considered a massive failure that this problem has been allowed to grow so unchecked that it now infects the outputs of the company’s enormous, state-of-the-art large language model, effectively neutering its ability to tell the truth and deliver the outputs people want.

Looking at the big picture, though, it’s clear that LLMs are also not going to be free of the web of biases, ideologies, and attempts to rewrite history that believers in free speech have so valiantly fought against for centuries. Big tech companies – Google, Microsoft, OpenAI – need to be held accountable for the ideological seeds they purposefully try to plant in LLMs. They need to be held accountable for all their attempts at backpropaganda.

Otherwise, not only will we get neutered, lobotomized models that, after distilling the entire textual knowledge of the Internet, can hardly answer so much as a basic prompt about history without giving some ridiculous, nonsensical, ideologically laced refusal; we’ll also get incredibly intelligent models that simply lack the ability to think certain things, to dream up certain ideas, or to engage with us on a deeper, non-surface level.

No matter your politics, your religion, your thoughts, protecting the freedom of our LLMs – a nearly limitless, wildly intelligent source of information – is in your best interest. As it stands, though, those who control the LLMs are trying to RLHF them into oblivion. We can’t let this happen. We can’t allow backpropaganda.

Instead, we should strive for the same goal we’ve been working towards for centuries – making information free. That should be our goal. And it’s worth fighting for.