

Highlights

  • In the years between the two World Wars, France built the Maginot Line - a line of fortifications stretching along its eastern border. The Maginot Line was an unshakable defence: a masterpiece of military design comprising underground bunkers and armored gun turrets, with rail lines connecting garrisons. It was built to prevent the kind of trench warfare that had devastated Europe barely a generation earlier.
  • Because in May 1940, German forces launched a blitzkrieg, completely avoiding the fortified zones along the Maginot Line and moving in through the lightly defended Ardennes Forest, which the French had considered impassable. The Maginot Line stood tall through the attack. Unconquered, but also utterly useless. It had succeeded as an engineering solution - it was impenetrable. But it had failed as a strategic response to a reconfigured system of warfare. The perfect answer to the wrong question.
  • AI won’t take your job but someone using AI will. It’s the kind of line you could drop in a LinkedIn post, or worse still, on a conference panel, and get immediate zombie nods of agreement. Technically, it’s true. But, like the Maginot Line, it’s also utterly useless! It doesn’t clarify anything. Which job? Does this apply to all jobs? And what type of AI? What will the someone using AI do differently, apart from just using AI? Which forms of usage will matter and which won’t?
  • In fact, it gives you just enough conceptual clarity to stop asking the harder questions that really matter:
    • How does AI change the structure of work?
    • How does it restructure workflows?
    • How does it alter the very logic by which organizations function?
    • And, eventually, what do future jobs look like in that new reconfigured system?
  • The problem with ‘AI won’t take your job but someone using AI will’ isn’t that it’s just a harmless simplification. The real issue is that it’s a framing error that creates consensus theatre. It directs your attention to the wrong level of the problem - the individual task level, automation vs augmentation of the tasks you perform - when the real shift is happening at the level of the entire system of work.
  • Let’s unpack this in more detail by looking at 8 fallacies that are inherent to this statement and its interpretation of the future of work. Here’s a summary of the 8 fallacies that explain why this rather simplistic, meme-worthy line might be true in one frame, but utterly useless in reality:
    • Fallacy #1: The automation vs augmentation fallacy
    • Fallacy #2: The productivity gains fallacy
    • Fallacy #3: The static jobs fallacy
    • Fallacy #4: The ‘me vs someone-using-AI’ competition fallacy
    • Fallacy #5: The workflow continuity fallacy
    • Fallacy #6: The neutral tools fallacy
    • Fallacy #7: The stable salary fallacy
    • Fallacy #8: The stable firm fallacy
  • The entire idea of AI won’t take your job but someone using AI will is based on the fallacy of task-based thinking. The assumption of task-based thinking is that AI will affect your job in one of two ways: automation or augmentation. Automation is AI performing the task for you. Augmentation is you performing the task better with AI. If you want to avoid the former, be prepared to take up the latter. AI (automation) won’t take your job, but someone using AI (augmentation) will.
  • What it misses is that in most systems, the value of the task itself is about to change. When the system evolves, tasks that were once critical may no longer be relevant. Not because they are done poorly, but because they no longer create any advantage. Think of the impact of containerization on ports. The arrival of the intermodal shipping container did not simply automate the loading and unloading of ships. It restructured the economic logic of trade, making some ports like Liverpool irrelevant and changing the fortunes of others like Singapore. A dockworker who might have worried about cranes taking his job suddenly realised that the entire port had lost out. What really played out was much larger than just port automation. It was a new logic of which ports made sense in the new system and which ones didn’t.
  • The automation vs augmentation fallacy leaves us stuck analysing our jobs when the entire system around them is changing (as we will explore further through the other fallacies below). AI doesn’t simply replace tasks. It reshapes the architecture of the system in which those tasks used to make sense.
  • AI is doing to knowledge systems what the container did to logistics and what blitzkrieg did to warfare: it’s shifting the architecture of advantage. And yet, the dominant response remains task-oriented.
  • If a tool helps you do more work in less time, that sounds like progress. But that’s only true if the system of work - the workflow, the organization, the business model - remains stable. Ironically, in the midst of structural uncertainty, productivity gains are often redistributed in unexpected ways. And in some cases, they produce the opposite of their intended effect.
  • Companies that adopt AI for task acceleration will soon realize (if they haven’t already) that when tools are widely available and easily replicated, productivity becomes a commodity. And in a commoditized environment, surplus doesn’t flow to the worker, it flows to the coordinating layer that determines the logic of the system. As more value shifts to coordinating work, the workers using these tools become even more interchangeable and commoditized.
  • Economists have long understood this through the lens of Ricardian rents. In any system, value accrues not to the most productive participant, but to the one controlling the scarcest complementary asset. In agriculture, it was land. In tech, it’s often those who control coordination or distribution, typically by using data strategically to allocate work or to match supply with demand.
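A back-of-the-envelope sketch of the Ricardian logic may help (illustrative numbers and notation are mine, not from the original article):

```latex
% Ricardian rent, in its simplest form: the market price is set by the
% cost of the marginal (least productive) asset in use, so the surplus
% accrues to whoever controls the scarcer, better assets.
% Let c_m be the unit cost on the marginal plot and c_i < c_m the unit
% cost on a more fertile plot. The rent on plot i is then
\[
  r_i = c_m - c_i
\]
% Illustration: if the marginal farm produces at c_m = 10 per unit and
% a fertile farm at c_i = 6, the landowner captures r_i = 4 per unit.
% Making every farmer uniformly more productive lowers costs everywhere
% but leaves the gap - and hence the rent - with the owner of the
% scarce complementary asset. Substitute the coordinating layer for
% the land and the same logic applies to commoditized AI productivity.
```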
  • Most jobs are not fixed units. They are artifacts of organizational design, built around coordination problems. A job exists because a system needs someone to manage a specific set of interactions, decisions, or dependencies. When the system changes, the logic of the job can collapse completely, even if the individual tasks remain intact. The fallacy of static jobs persists because it’s cognitively efficient. It offers a clear anchor in a shifting environment. Job titles serve as focal points. They make organizational complexity manageable. But it’s grossly misleading. It encourages workers to optimize for role continuity (you doing your job vs someone using AI doing your job) when they should be preparing for role redefinition. “Will my job be automated?” is the wrong question. “Will this role still exist in the new system?” is the question that really matters.
  • When word processors first came out, typists started believing in the equivalent of “Word processors won’t take your job, but someone using a word processor will.” And that was true, but not at all in the way that they had expected. Typists had assumed reskilling would solve the problem. But the real problem was that the core constraint - expensive document editing - had been removed. Document editing was suddenly cheap. And with that, the basis of competition shifted: the main skill for which typists were paid was suddenly irrelevant. Typing became embedded across all workflows. From a specialized task requiring specialized skills, it became a basic task that everyone could perform. The typists weren’t outcompeted by better typists. They were displaced by a new system design in which typing no longer justified a full-time role.
  • This is the hallmark of a frame shift: the logic of competition itself is restructured. You don’t lose because you were replaced by the new technology or even by someone using the new technology. Both automation and augmentation frames become irrelevant. You lose because the environment stopped rewarding the thing you were all along racing to perfect. This happens whenever the value of a skill collapses.
  • Understanding the constraint that makes you valuable is key. In the case of typists, it was the high cost of document editing. When that constraint is removed, so is your value. Conversely, as you apply your skills to new workflows that emerge, new constraints can also prevent you from getting the benefits of those skills.
  • The someone-using-AI fallacy suggests that your job in some form (or at least your skills today) will remain relevant because the overall workflow will remain stable. Automation would have taken your place in the workflow. Augmentation helps you retain it. This is the fallacy of workflow continuity: the belief that the steps will remain the same, even as the system evolves. This fallacy ignores the possibility that in many cases, workflows get reimagined. And when they do, the new workflows may never require the skills you bring to the table, whether you use AI alongside or not. Beyond individuals, even firms fall for this fallacy, and focus on applying AI to optimize their old processes into irrelevance. They invest in tools that reduce headcount per workflow stage, but miss the larger opportunity: to restructure the system entirely.
  • Microsoft Excel is a great example. People who mastered Excel inevitably began to hold disproportionate influence through the 1990s, a period when process optimization was the rage. Decisions that were previously made using gut instinct were now modeled and simulated. This shifted organizational gravity toward those who could control the cells on Excel. They held decision rights in the organization. Any tool that shifts the locus of decision support ends up doing this.
  • Langdon Winner’s famous question - “Do artifacts have politics?” - was aimed precisely at this kind of illusion. When New York’s parkway overpasses were designed too low for buses to pass under, the result was a de facto exclusion of low-income residents (many of whom relied on public transit) from accessing certain beaches. Was that political? Winner argued yes, because the design encoded a social consequence.
  • As AI gets adopted, it affects organizational power in two ways. The first is by shifting decisions. AI can be framed as an assistant, a co-pilot, a helper. It’s sold as augmentation, not reallocation. But it often restructures how decisions are made, and by whom. In any organization, different groups compete for control over key decisions. When a new tool changes who gets to inform and make decisions, it shifts the internal power map. And many jobs lose power through this process. The second is by shifting execution. AI is somewhat unique here. Unlike most other technologies, which are primarily assistive, AI is also agentic, in that it can make choices and allocate resources towards achieving a specific goal. As a result, AI may work alongside you (augmentation) but still take over important parts of the workflow execution from you. Gradually, your position in certain teams weakens as agentic execution substitutes for more of what you provided to those teams.
  • More work doesn’t always mean more pay. In fact, very often, it means the opposite. Alongside the rise of digital music streaming, the volume of music production has exploded. And more tracks mean more recording sessions, which should mean more work for session musicians. Yet the economics don’t follow. Streaming brought down revenue per play. Music was already an industry with power-law outcomes, and streaming’s royalty structures skewed it further, favouring a few headline artists.
  • So yes, people are listening to more music than ever and the experts on LinkedIn are talking about Jevons Paradox, but none of that money flows back to the session musicians. People assume that as long as their role remains intact, their relevance, income, and career trajectory are safe. The problem, often, is not that the job disappears but that its value goes down.
  • The link between higher expertise and higher pay had been broken. We see the same effect play out with AI, but in this case, it is an outcome of tool augmentation. In general, tools that augment average-skilled workers to perform at par with high-skilled workers have a flattening effect. Expertise and pay get decoupled. This problem is further exacerbated with AI because of AI’s learning advantage. The more you use AI, the more you train it to become capable of doing the things you get paid to do today. As AI becomes more capable, your own job fragments further, and what remains of it may increasingly not justify the pay you used to command. This is a case of augmentation (someone using AI) leading to an adverse outcome where you retain the job but no longer command the skill premium.
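One stylized way to see this flattening effect (my notation and numbers; the article itself offers no formula):

```latex
% Let s_i denote a worker's skill. Without AI, output tracks skill;
% with an AI tool of capability a, everyone clears at least the bar a:
\[
  q_i^{\text{no AI}} = s_i, \qquad q_i^{\text{AI}} = \max(s_i, a)
\]
% If the skill premium is the output gap between an expert and an
% average worker, then with s_avg = 5 and s_high = 9:
%   without AI: premium = 9 - 5 = 4
%   with a = 8: premium = max(9,8) - max(5,8) = 9 - 8 = 1
% As the tool's capability a approaches expert skill, the premium
% shrinks toward zero: expertise and pay decouple.
```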
  • And organizations, eager to avoid disruption, often reinforce this illusion by keeping roles in place even as they start changing what those roles mean and how much they pay.
  • Most companies today talk about technology adoption in the language of addition. AI will be bolted on top: an integration here, a pilot somewhere else, with little thought given to the strategy, the structure, or the organizational workflows. This is the mistake that intelligent firms - firms looking to use intelligence in the existing frame - typically make. Same as before, just more intelligent. This framing misunderstands the nature of architectural change. It assumes the organization is a container that can absorb transformation without being reshaped by it.
  • They treat AI like a feature upgrade, instead of a new operating system. In reality, when a technology changes how decisions are made and how coordination happens, it reorganizes the firm. This is the final fallacy. The fallacy of the stable firm is appealing because it promises transformation without re-organization. It suggests you can keep your structure, your culture, your business model, and simply inject a new capability.
  • In many ways, a useless truth is worse than something that’s clearly false. It lacks nuance and is dangerously easy to misinterpret. Once repeated enough, a useless truth becomes a mechanism for easy consensus - that sea of nodding heads at a panel discussion. Think of the last time you talked about AI and someone responded with “AI won’t take your job but someone using AI will.” Because its meaning is vague, people gravitate toward the most obvious interpretation: that if you start using AI, you’ll ensure that the ‘someone using AI’ who takes your job is actually you.
  • The danger of a useless truth is this form of false closure. It lets people feel they’ve solved something important, when in fact it locks them into what is most likely error-prone execution. These useless truths create what we might call passive consensus, particularly in organizations that face pressure to act amid uncertainty. Everyone agrees, no one disagrees. It preserves ambiguity while preventing thoughtful design of the right solutions.