By Tyler Fisher in Journalism on the internet – Feb. 1, 2024

Anti-scale: a response to AI in journalism

Journalism can no longer chase Silicon Valley's tail in hopes of salvation. We need a self-determined vision for what journalism on the web is, who it is for, and how we build it.

[Image: a neon pink jagged line pointing downwards. Photo by Ussama Azam on Unsplash]

Let's start here: Gallup's annual media trust survey found in 2023 that only 32 percent of Americans trust the mass media to report the news "fully, accurately and fairly." People don't trust journalism. That's bad enough, but by any conceivable metric, journalism's foray online has failed. Consider the number of journalists employed, industry revenue, or the number of counties with no local news source: all have gotten markedly worse over the last two decades.

Each year, fewer people produce journalism and more people tune out entirely. As the industry's decline has continued without fail, news organizations have flailed in search of the magic solution to all their digital woes. Today, that savior is generative AI.

In reality, generative AI is just another hyped solution from a tech industry with no interest in a healthy civic media ecosystem. However, unlike many earlier supposed saviors, generative AI presents an existential risk. With the technology's ability to create plausible but often false content at unprecedented scale, AI-generated content threatens to overwhelm the web with bullshit in Harry Frankfurt's sense of the word: content indifferent to whether it is true.

It is now or never for the journalism industry to set its own course on the web. Journalism can no longer chase Silicon Valley's tail in hopes of salvation. We need a self-determined vision for what journalism on the web is, who it is for, and how we build it. Instead of embracing AI and competing with scale, I propose the anti-scale approach to technology in journalism.

Why generative AI is not the answer

Silicon Valley will say that generative AI can fully automate journalism, both reducing costs and producing better content. It cannot. Generative AI systems, specifically large language models (LLMs) like ChatGPT, are designed to predict the next word in a sequence. That makes for plausible but not necessarily accurate text. This is the "hallucination" problem: the model makes up bits of text that sound right but have no basis in reality. It showed up recently in Michael Cohen's court case, when he used an LLM that generated false legal citations.
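To make that concrete, here is a toy sketch in Python of what "predict the next word" means. Every word and probability in it is invented for illustration, and it is nothing like a real LLM in scale or training, but the mechanism is the same in spirit: the program strings together whatever is statistically likely to come next, and nothing in it checks whether the resulting sentence is true.

    import random

    # Toy next-word probability table. All of the words and probabilities here
    # are invented purely for illustration; a real LLM learns billions of such
    # statistics from its training data.
    NEXT_WORD = {
        "the":   [("court", 0.6), ("filing", 0.4)],
        "court": [("ruled", 0.7), ("dismissed", 0.3)],
        "ruled": [("in", 1.0)],
        "in":    [("favor", 1.0)],
        "favor": [("of", 1.0)],
        "of":    [("the", 0.5), ("Cohen", 0.5)],
    }

    def generate(start: str, length: int = 8) -> str:
        """Chain together likely next words; fluency, not truth, drives every step."""
        words = [start]
        for _ in range(length):
            candidates = NEXT_WORD.get(words[-1])
            if not candidates:
                break
            choices, weights = zip(*candidates)
            words.append(random.choices(choices, weights=weights)[0])
        return " ".join(words)

    print(generate("the"))
    # Possible output: "the court ruled in favor of Cohen" -- grammatical and
    # confident, with no grounding in whether any such ruling exists.

Scaled up by many orders of magnitude and trained on much of the web, this is the engine behind the fluent but sometimes fabricated citations described above.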

It should be obvious that any technology prone to making up facts is a bad fit for journalism, but the Associated Press, the American Journalism Project, and Axel Springer have all inked partnerships with OpenAI. The premise of these deals is that providing LLMs with more accurate information as part of their training data will produce more accurate responses.

I don't believe any amount of accurate information provided to an LLM will eliminate the hallucination problem, but let's assume they're right. Let's assume that in 2024, with access to a real-time feed of accurate news and information through a series of content partnerships, OpenAI releases GPT-5, a new LLM that never hallucinates and always gives up-to-date, accurate answers. Even in this world, generative AI still does not solve the most crucial problems facing the news industry.

Remember, less than a third of Americans trust journalism to report fairly and accurately. Combine that with another recent survey result: the Computer History Museum and Embold Research found that 80 percent of Americans are "concerned" about news organizations using AI. When asked about specific tasks news organizations might use AI for, such as to "write news stories for human review" or "to fact-check information for news articles," a majority of Americans were not comfortable with any task except one: "translat[ing] news content into other languages."

Even if this survey is off by ten percentage points, the numbers are overwhelming: Americans do not want AI-generated news content. If we consider trust a baseline metric for our industry, surely journalism can't jump into bed with a technology that an overwhelming majority of Americans distrust, especially with the industry already at such a trust deficit.

Let's speculate further. Perhaps in my hypothetical GPT-5 world, people become more trusting of AI-generated content when they see it is more dependable. This is where we get to the crux of the matter. Look at the media that people respond to today. Nearly as many people subscribe to a Patreon creator as subscribe to The New York Times, and that's just one individual creator-driven platform. TikTok's whole appeal is an endless feed of people on camera. People are looking for connection on the web. A journalism industry that moves away from people and connection, towards automation and anonymity, is one that can't bridge the trust gap. Journalism is, at its best, humans telling human stories.

To be clear, LLMs have useful capabilities, though the chatbot UI paradigm is not likely to surface them very well. Imagine the best possible version of Grammarly, able to restructure and simplify language for a writer struggling to explain a difficult concept. Already, AI tools are making transcription a painless task. Designer Maggie Appleton has written about how LLMs can help us, and I highly suggest reading her work. As a targeted and limited tool, LLMs can help, though we must ask if they are worth the exorbitant energy cost and privacy concerns. As a panacea for what ails journalism, they can do very little. In fact, they likely do more harm than good.

Against scale

Generative AI falls in a long line of journalism panaceas that promised salvation through scale. More content, more readers, more revenue, just more. The industry has spent years optimizing its editorial workflows, publishing practices and philosophies around the idea of scale. As its proponents would have it, generative AI is just the next logical step in efficiency.

In fact, generative AI promises to destroy scale-driven strategy. It is now trivially easy and cheap for anyone to produce online content. Any attempt to compete on scale while maintaining journalistic integrity will inevitably fail in the face of an endless flood of fraudulent content produced with no scruples. Consider this simple cost analysis from Maggie Appleton:

ChatGPT currently costs a fraction of a cent ($0.002) to generate 1000 tokens (~words), meaning that it would cost only two cents to generate 100 articles of 1000 words. If we used a more sophisticated architecture like prompt chaining the cost would certainly be higher, but still affordable.

Given that these creations are cheap, easy to use, fast, and can produce a nearly infinite amount of content, I think we’re about to drown in a sea of informational garbage.

We’re going to be absolutely swamped by masses of mediocre content.

Every marketer, SEO strategist, and optimiser bro is going to have a field day filling Twitter and Facebook and Linked In and Google search results with masses of keyword-stuffed, optimised, generated crap.
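To put rough numbers on that, here is a back-of-envelope sketch of the cost math in Python. The per-token price is the one Appleton quotes above; the assumption that a word averages about 1.3 tokens is mine, and real API pricing varies by model, so treat the output as an order-of-magnitude illustration rather than a precise figure.

    # Back-of-envelope cost of machine-generated articles. The price per
    # 1,000 tokens is the figure quoted above; the tokens-per-word ratio is
    # an assumption made here for illustration.
    PRICE_PER_1K_TOKENS = 0.002  # dollars, as quoted
    TOKENS_PER_WORD = 1.3        # rough assumption

    def generation_cost(articles: int, words_per_article: int) -> float:
        """Estimated dollar cost to generate the given volume of text."""
        total_tokens = articles * words_per_article * TOKENS_PER_WORD
        return total_tokens / 1000 * PRICE_PER_1K_TOKENS

    print(f"${generation_cost(100, 1000):.2f}")         # 100 articles: ~$0.26
    print(f"${generation_cost(1_000_000, 1000):,.2f}")  # a million articles: ~$2,600

Whether the exact figure comes out to a few cents or a few thousand dollars, the conclusion is the same: flooding the web with plausible-sounding text costs next to nothing compared with reporting a single story.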

Instead of trying to compete, journalism must reject the scale-driven paradigm in favor of deeper connection and community. Rather than embrace the scale promised by AI-generated content, journalism must distinguish itself through what Jennifer Brandel and Mara Zepeda call "A.E.", or "actual experience":

Actual Experience is the human, three-dimensional, beating heart, sweaty side of life. Here, defining terms is useful:

Actual: existing in fact; typically as contrasted with what was intended, expected, or believed.
Experience: practical contact with and observation of facts or events.

Developing some kind of automated AI news service is not going to push the needle on growing trust in the media. If anything it adds to the backsliding.

But what has been proven to increase trust in information is human, face-to-face interactions and relationships.

For Brandel, this means making relationship-building a central job of news organizations, through active participation in communities and local events. But she concedes that "AI might be coming for many of the jobs that journalists used to do," leaving "technical, transactional and AI-related jobs" in the industry to maintain the efficiencies brought about by AI.

As I argued above, I believe AI will be not helpful but harmful to journalism, which leaves technologists in news free to take part in building actual experience. Technology can be a core contributor to deeper, more trustworthy, more fulfilling products for people. Getting there will require a significant reframing of what we do and why.

The anti-scale approach to technology in journalism

Instead of embracing AI and competing through scale, the anti-scale approach to technology in journalism recenters the practice on helping people find information and solve problems. It might sound obvious, but as technologists, we lost sight of our purpose somewhere among the pageviews, recirculation, click rates, funnels and churn. We call people users. We measure loyalty like a feudal lord. Our technology revolves around getting users in the door and locking it behind them. It's more important that we get an email address and a few seconds of a video ad view than that people get the information they need. We have to stop chasing tens of millions of pageviews and calling that success.

Communities are fractal, and individual information needs are specific. One-size-fits-all approaches will not work to meet the information needs of a diverse public. Our technology needs to help journalism meet people where they are, to more easily adapt to rapidly changing digital contexts.

How do we do this in 2024? As always, there are no easy answers. I'll be spending my year thinking about it, and here are some early questions I'm asking:

  • How might we build tools to help journalists find and foster digital communities amid a fractured social media ecosystem? Our social networks are fragmenting. Different people want different experiences from their social media. There's the endless dopamine drip of TikTok's rapid-fire algorithmic feed. There are hours-long videos on obscure topics on YouTube. There's the complete control over data, moderation policies and safety measures on Mastodon. Journalists need to build communities, have conversations, and communicate their journalism in ways that feel native to these platforms. With more platforms and formats than ever, they need tools to make it easier.
  • How might we build products that differentiate good journalism in an ocean of AI-generated content? AI-generated content is cheap and fast. Good journalism is expensive and slow. This reality is why I'm pitching an anti-scale approach. But often, our slow and expensive journalism is published on a product that looks no different from the cheap stuff. When real articles are indistinguishable from fake, are 800-word articles with an image at the top still the right atomic unit of journalism? Can we communicate information in a better way that feels inherently trustworthy?
  • How might we quantitatively measure the impact of journalism? This is admittedly an age-old question, but one we are no closer to answering. The metrics that drive digital journalism come from the scale-driven era. Pageviews, time on site and click rate all help you optimize for more. The anti-scale approach cannot be innumerate, but it needs new measurements. What metrics do we need for an anti-scale approach?

I'm on Mastodon and Bluesky. Let me know: what do you think about the anti-scale approach?
