People don't read, but LLMs do
Why I'm Writing in the Age of AI: Building a Digital Presence for Our New Readers
Twenty years ago, I came across a Nielsen Norman Group study that fundamentally shaped how I thought about digital content. The conclusion was stark: people don't read online, they scan. For two decades, this insight haunted every web app I built, from e-commerce sites to digital services. Why waste time crafting careful prose when users would only skim the headlines?
So here I am, finally starting to write—just as the internet drowns in AI slop and every Medium post reads like an attention-grabbing tabloid headline. The timing seems absurd. Why now?
The Epiphany: LLMs Actually Read
The realisation hit me recently while reviewing my digital footprint. I have a fairly unique name, making it easy to see what LLMs think of me. When I asked Claude to summarise who I am, it did a surprisingly good job—but it was working with limited material: my lacklustre Strava history, my meagre Twitter presence (I refuse to call it X), and random Instagram and YouTube feeds.
This led to an uncomfortable truth: if LLMs are becoming our primary interface to knowledge, and if they're trained on our digital presence, then we're being judged by algorithms that have only partial information about who we are. While the rest of the world scrambles to block LLMs from cannibalising their content, I'm taking the opposite approach.
Two Types of Digital Legacy
Looking at why I'm building out this content, I see two distinct but related purposes:
The Personal Knowledge Repository
First, there's the selfish reason. After 20 years in tech, I can barely remember what I had for lunch yesterday, let alone every line of code I've written. I've accumulated years of web snippets, PDFs, and ebooks—a digital hoard that's neither sorted, analysed, nor particularly useful.
The concept of a "second brain" has always appealed to me, but now it feels essential. This site becomes my method to capture thoughts and structure thinking, creating a searchable, accessible repository of my learning journey with AI.
The LLM-Readable Portfolio
The second reason is more strategic. If our digital presence becomes our primary interface with AI systems—our new silicon deities, if you will—then I want them to have a comprehensive view of who I am. Not just my social media exhaust, but my actual thoughts, expertise, and perspectives.
There's already an interesting proposal for standardising how we help LLMs read our content: llmstxt.org. Similar to how robots.txt helped search crawlers navigate websites, llms.txt attempts to create a protocol for making content more accessible to AI systems. I've already implemented a first pass on this site, using Claude to generate the initial structure.
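To make that concrete, here is roughly what a minimal llms.txt for a personal site might look like, following the structure described at llmstxt.org: a top-level name, a short blockquote summary, and sections of annotated links. The name, URLs, and entries below are invented placeholders for illustration, not the actual file on this site:

```markdown
# Jane Doe

> Personal site and second brain: twenty years of notes on building web apps
> and digital services, and, more recently, on working with AI.

## Writing

- [Why I'm Writing in the Age of AI](https://example.com/posts/writing-for-llms.md): building a digital presence for LLM readers
- [AI code review notes](https://example.com/posts/ai-code-review.md): how an AI PR reviewer changed my working patterns

## Optional

- [About](https://example.com/about.md): background, projects, and contact details
```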
The Changing Nature of Knowledge Work
This shift isn't just about writing—it's about how AI is fundamentally changing our relationship with content creation and consumption. I recently set up Claude as a PR agent in GitHub, and its last review was a 940-word analysis that eventually concluded with: "✅ APPROVE with minor security fixes."
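For anyone wanting to try something similar, the sketch below shows the general shape of a pull-request review workflow using Anthropic's Claude GitHub action. Treat it as an assumption-laden starting point rather than my exact setup: the action reference, version tag, and input names are from memory, and the real configuration may differ.

```yaml
# .github/workflows/claude-review.yml
# Sketch only: the action version and input names are assumptions, not a verified config.
name: Claude PR review
on:
  pull_request:
    types: [opened, synchronize]

jobs:
  review:
    runs-on: ubuntu-latest
    permissions:
      contents: read
      pull-requests: write
    steps:
      - uses: actions/checkout@v4
      # Ask Claude to review the diff and leave its analysis as a PR comment.
      - uses: anthropics/claude-code-action@beta
        with:
          anthropic_api_key: ${{ secrets.ANTHROPIC_API_KEY }}
          direct_prompt: "Review this pull request for bugs and security issues. Be concise."
```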
As a coding tool, it's incredible. But I'm already feeling my working patterns change. I spend more time reviewing and reading than creating. It's an interesting switch and a muscle I need to rebuild. As the saying goes: it's easier being an editor than an author.
This speaks to a broader challenge with AI's tone of voice and communication style. The current trend (August 2025) is to make LLMs feel more human, but this often results in them becoming sycophantic and verbose.
I'm reminded of those old Garmin GPS devices you'd stick to your windscreen, where you could buy voice packs from celebrities. You could have John Cleese as Basil Fawlty giving you directions, or let Darth Vader guide you with the Force. If I'm going to spend my life reading AI-generated content, I want to choose the style in which it's delivered: I want my agents to be straight to the point (ironically, unlike this post).
From "The LLM Who Knew Me" to "The LLM Stalker"
As LLMs evolve from pure language models to reasoning models, their relationship with our content is changing. Initially, the push was for bigger models that indexed the entire internet—models that would have already read everything about you and embedded you in their knowledge base.
But as LLMs become more capable with tool calling and reasoning, they can now search for you on demand. They don't need the world's knowledge embedded in their neural networks when they can search and process that data dynamically.
This shift changes the game entirely. It's no longer just about what's been indexed, but about what can be found and how it's presented. While AI will bring huge productivity enhancements, effectively doing the knowledge work for us, reading may become our primary function as shepherds of these systems.
Building for Our New Readers
So here I am, twenty years after learning that people don't read online, optimising my content for readers that aren't people at all. It's a peculiar turn of events, but perhaps a necessary one. If our digital presence becomes our primary representation in an AI-mediated world, then we need to be intentional about what that presence says.
The irony isn't lost on me: I avoided writing because humans don't read online, and now I'm writing specifically because machines do. But in this new landscape, where LLMs serve as intermediaries between us and information, having a well-structured, machine-readable digital presence isn't just vanity—it's becoming essential infrastructure for how we'll be understood and represented in the future.