Summer is mostly gone, and I’m revisiting some drafts I hadn’t yet published. Back in June, I attended a meeting here in Gijón with two great talks: one by Miguel Fernández, CTO of Omnia, a fascinating startup from Asturias tackling SEO in the age of agents, and another by my friend Fabien Girardin, who shared insights about an AI-powered future and what it could mean for a region like Asturias.
After the talks, I spent time chatting with Fabien, who introduced me to Every, a software company that publishes solid material on technology. There I read a couple of pieces by Dan Shipper, including this article on LLMs as summarizers, which I enjoyed. His view aligns with older technical debates about LLMs as compressors.
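To make the compression framing a bit more concrete: information theory says that a model assigning probability p to the next token can, in principle, encode that token in about -log2(p) bits, so the better a model predicts text, the fewer bits it needs to represent it. Here's a toy sketch of that idea; the probability table is a made-up stand-in for an LLM's predictive distribution, not anything from Shipper's article.

```python
import math

# Toy next-token "model": a fixed probability table standing in for an
# LLM's predictive distribution (a real model would condition on context).
toy_model = {"the": 0.20, "cat": 0.05, "sat": 0.04, "on": 0.10, "mat": 0.03}
OOV_PROB = 0.001  # assumed probability for any token the toy model doesn't know

def bits_needed(tokens):
    """Ideal code length: -log2 p(token), summed over the sequence.
    A model that predicts the text well needs fewer bits to encode it."""
    return sum(-math.log2(toy_model.get(t, OOV_PROB)) for t in tokens)

text = "the cat sat on the mat".split()
print(f"{bits_needed(text):.1f} bits to encode {len(text)} tokens")
```

Prediction and compression end up being two views of the same capability, which is why the summarizer and compressor framings point at the same underlying strength of these models.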
Both mental models help explain one of the core capabilities of LLMs. As the author notes, summarizing is one of the main intellectual activities I find myself doing throughout the day. We compress information constantly: when onboarding new team members, presenting to senior management, or trying to influence a decision in a meeting or over email.
Question: Is summarizing alone enough to justify the AI hype?
Probably not enough for a full revolution. Still, summarizing is a foundational skill in the knowledge economy, and on top of it sits the promise of AI agents coordinating specific tasks.
Reading these pieces made me think of Wikipedia as one of the greatest summarizers of our time and how it relates to LLMs. Its impact on the knowledge economy has been huge, but we never hyped it nearly as much. Wikipedia condenses complex information into digestible summaries, offering a reliable map for anyone seeking a broad overview of a topic.
Like LLMs, Wikipedia sometimes gets things wrong, and if you're not an expert, those mistakes are hard to spot. With Wikipedia, you can always dig into the editors' discussions behind an article; with LLMs, the equivalent would be inspecting how the network activates when answering a question, something worth doing if the topic involves compliance issues. But most of us rarely go that deep: we usually lack the foundation to judge the fine details, or the use case doesn't justify that level of effort. What we really need is a broad map to better understand the problem at hand.
This tradeoff between detail and usability extends beyond Wikipedia. Summarizing or compressing information means losing resolution and nuance. That's no different from what scientific models do: they simplify reality into something we can use. You can't demand that they be as accurate as reality; if they were, they'd be useless. When you're hiking, a map is invaluable, even if folding it is a hassle. A full 3D model of the terrain would be more faithful but far less practical to carry. We use models and simplifications all the time, and one benefit of AI reaching a general audience is that it sparks discussion about these deeper topics.
LLMs work the same way: they compress information to make it digestible. Like any model, they're useful if we understand what they're good for and how to use them. That reminds me of the analogy in Statistical Rethinking, where models are described as Golems. You have to learn how the Golem, or the statistical model, works and be intentional about how you ask it to act; otherwise it might tear the whole town down. The same goes for AI models: if we want them to be effective in our work, we need to guide them carefully rather than blindly delegating and risking serious mistakes.
The real challenge isn't whether AI can summarize; it's whether we can learn to read the maps it gives us wisely.