Highlights

  • Right now, millions of engineers are using AI to do their jobs. “Top engineers at Anthropic, OpenAI say AI now writes 100% of their code,” says Fortune. Claude is now effectively writing itself, says the person building Claude.1 “When AI writes almost all code, what happens to software engineering?” asks a software engineer. This is all a very well-known phenomenon at this point.2
  • By contrast, data analysts, who also write a lot of code, are not using AI to do their jobs. Though most use chat applications like ChatGPT, a 2025 survey from dbt Labs found that less than a third are using dedicated development tools. Things may have changed since that survey—it’s from early 2025, which was years ago these days—but by most accounts, AI seems to be upending analysts’ lives much less than it’s upending engineers’.
  • Then, people quickly realized that AI is good at writing code. Initially, most AI-powered coding products, like GitHub Copilot or Cursor, were fundamentally about asking for permission: They proposed changes in code editors, and engineers were asked if they wanted to accept or reject the updates. Simply accepting all of the model’s edits was a fairly obvious idea, but that made people nervous. So most tools didn’t encourage it, until Anthropic said, eh, why not?,3 and released a fully autonomous coding app. Practically overnight, Claude Code became one of the most influential products in the world, and Anthropic became one of the most valuable companies in the world.
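    The difference between the two eras is just where the human sits in the loop. A minimal sketch, assuming hypothetical propose_edits() and apply_edit() helpers in place of a real editor integration:

    ```python
    # Illustrative stubs: a real tool would ask a model for a diff and
    # write accepted changes to disk. Nothing here is any product's API.
    from dataclasses import dataclass

    @dataclass
    class Edit:
        summary: str
        patch: str

    def propose_edits(task: str) -> list[Edit]:
        return [Edit(summary=f"stub edit for {task!r}", patch="")]

    def apply_edit(edit: Edit) -> None:
        print(f"applied: {edit.summary}")

    def permission_based(task: str) -> None:
        """Copilot/Cursor-style flow: a human gates every change."""
        for edit in propose_edits(task):
            if input(f"Apply {edit.summary}? [y/n] ") == "y":
                apply_edit(edit)

    def autonomous(task: str) -> None:
        """The accept-everything flow: nothing sits between model and codebase."""
        for edit in propose_edits(task):
            apply_edit(edit)
    ```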
  • At its core, Claude Code is a bunch of looped requests to Claude. A user says “add a button to my website;” that is turned into a prompt to Claude; Claude’s response is fed back into another Claude; and again; and again; and so on. But why stop there, many people wondered. Could you have a manager Claude tell the first Claude to add a button to the website? Could you have a director Claude tell the manager Claude what problem it needs to solve, and have the manager Claude decide to add a button on its own? Could you have a CEO Claude tell the director Claude to hit their quarterly targets? Could you have a board of Claudes tell the CEO Claude to sharpen their pencil?4 Which is all to say, Gas Town—i.e., an army of Claudes, telling each other what to do—was a fairly obvious idea. Still, most people didn’t try to build it—not in its unhinged, explosive form, anyway—because it sounds dangerous and expensive. But then, someone did, and it got a bunch of attention, because it was unhinged and explosive.
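    In code, that loop is only a few lines. Here is a minimal sketch using the Anthropic Python SDK; the model name, the “DONE” stop signal, and the manager/worker split are illustrative assumptions, not Claude Code’s actual internals:

    ```python
    # A toy version of the loop described above: Claude's output is fed
    # back to Claude until it signals that it's finished.
    import anthropic

    client = anthropic.Anthropic()  # reads ANTHROPIC_API_KEY from the environment

    def run_agent(task: str, max_turns: int = 10) -> str:
        """Loop Claude on its own output until it says it's done."""
        messages = [{"role": "user", "content": task + " Say DONE when finished."}]
        reply = ""
        for _ in range(max_turns):
            response = client.messages.create(
                model="claude-sonnet-4-5",  # illustrative model name
                max_tokens=1024,
                messages=messages,
            )
            reply = response.content[0].text
            if "DONE" in reply:  # illustrative stop signal
                break
            # The loop: this response becomes context for the next request.
            messages.append({"role": "assistant", "content": reply})
            messages.append({"role": "user", "content": "Continue."})
        return reply

    def manager(goal: str) -> str:
        """A hypothetical 'manager Claude' that delegates to a worker loop."""
        task = run_agent(f"Turn this goal into one concrete task: {goal}")
        return run_agent(task)  # the worker Claude executes the manager's task
    ```

    Stack more layers of manager() on top and you have the Gas Town idea; the only real design questions are who watches the loops and who pays for the tokens.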
  • Of course, if a bunch of Claudes are good at managing our software projects, maybe they’d be good at managing our personal lives? Our lives aren’t that complicated; they’re just scattered. They’re in our personal emails, and our work emails, and texts, and calendars, and in our documents, and our bank statements, and our forgotten Banana Republic Rewards Credit Card accounts. Giving Claude access to all of these things and telling it to be a personal assistant is a fairly obvious idea, but it’s a horrifying one. So, most companies that tried to build personal AI assistants did so “responsibly,” by carefully gating what the assistant could see and do. And then an engineer said, eh, why not?, and yippee-ki-yayyed together Clawdbot, an AI assistant with access to absolutely everything. It became, in a month, the world’s sixth-most popular open source software project.5
  • Look, this is a responsible blog that believes in doing responsible things. It believes that it is correct for AI data products to focus on delivering “trusted insights on your enterprise data.” It believes that, “as AI agents evolve from experimental sidekicks to productive team members,” of course “enterprise leaders must design systems that are not only powerful but trusted, governed, and simple to use.” It believes that if the world were right and just, the product “that helps data teams deploy analytics agents they can trust” would be the product that earns everyone’s business. We should be rigorous. We should measure twice and cut once. We should be data stewards, and master data managers. We should not pursue the fairly obvious—and obviously irresponsible—idea of giving an AI agent unfettered access to our databases, our documents, our emails, our Slack messages, our Zoom calls, our meeting notes, and our customer support messages, and telling it, “Go find me something useful, and don’t come back until you do.” We should not launch a hundred Claude Code sessions and instruct them all to chase whatever hunches they have about how we could make more money. We should not have Codex test a new hypothesis every three seconds, until one finds a billion-dollar needle in a haystack.
  • But someone will. Someone will make a product that does that. And given this environment—and our recent history—which product are you betting on? The slow and steady one that carefully audits its structured context stores and tells users it doesn’t have enough information to answer their question? Or the one that cranks the AI dial to 12? Will it be the product that worries itself with governance and keeping inference costs low, or the one that believes that a dollar spent on Opus is probably a lot more productive than a dollar spent on an analyst, and tries to set a data center on fire on your behalf?6 Is it the AI agent that’s optimized to oh-so-precisely answer mundane questions like, “How many shirts did we sell last week?” over and over again via a Slack integration? Or is it a battalion of Codexes and Claudes that are all told to relentlessly and recklessly find ways to make more money?
  • You could have two theories about this:
    1. Analysts do a job that is uniquely hard for AI. We’ve talked about this theory a lot. Software projects are relatively contained—there is a codebase; there are users who give feedback on what that codebase does; there can be specifications for how you want to update that codebase to improve it; all of these things can be written down. Software is also relatively testable—change the code; push the new button; does it work? Data analysis is neither of these things. To solve an analytical problem, you have to know about a codebase, but also a business, a market, the thoughts inside of people’s heads, and the location of nearby electrical substations. You cannot write all of this down. Moreover, analysis isn’t testable. You find out whether your recommendation was good only after it plays out.
    2. Or, analysts are cowards.
  • These days, people spend a lot of time talking about the future of software. From an earlier post, here’s one way you could think about it:
    1. Before we all had computers and phones and Instagram, making art was hard. You had to have a fancy camera, or painting skills, or the ability to stitch together film strips into a video. Because art was expensive and somewhat scarce, we valued the art itself.

    2. Then it became easy to make. You can create great art in seconds, sometimes without even meaning to. And as the cost of making it fell, the value and notoriety of each individual piece of art fell too.

    3. So we started to care more about the creators than their specific creations. Like: Name that one great Kai Cenat stream. What’s your favorite Mr. Beast video? What’s Charli D’Amelio’s masterpiece? Some things might be more memorable than others, but there is no opus. Very little stands on its own. Popularity comes from a personality and an amorphous body of work.

  • Now, the cost of creating software is also going to zero, as they say. So would we not expect to see the same patterns here? While that doesn’t mean big software businesses will go away—there will always be workhorse products that do accounting and manage warehouses and fly airplanes, just as there are still big-budget Hollywood movies—could there not also be an ecosystem of influencers who make software that is popular because they made it? …