With A.I., though, programmers ascend to an even higher level of abstraction. They describe, in regular language, what the program should do, and the agents translate that idea — that human intent — into code. Writing software no longer means mentally juggling the nuances of a language like Python, say, or JavaScript or Rust. Coding no longer involves messing up an algorithm and then trying to figure out where your error lies. That part, too, has been abstracted away.
Given A.I.’s penchant for hallucinating, it might seem reckless to let agents push code out into the real world. But software developers point out that coding has a unique quality: They can tether their A.I.s to reality, because they can demand the agents test the code to see if it runs correctly. “I feel like programmers have it easy,” says Simon Willison, a tech entrepreneur and an influential blogger about how to code using A.I. “If you’re a lawyer, you’re screwed, right?” There’s no way to automatically check a legal brief written by A.I. for hallucinations — other than face total humiliation in court.
I’ve written about developers for decades, and they have always rhapsodized about the thrill of bringing a machine to life through arcane commands. Sure, the work could be cosmically exasperating, requiring hours or even weeks to chase down a single bug. But the grind sharpened the joy. When things finally started working, the burst of satisfaction was intoxicating.
So I was surprised by how many software developers told me they were happy to no longer write code by hand. Most said they still feel the jolt of success, even with A.I. writing the lines. “I love programming. I love getting in the zone. I love thinking big thoughts. It’s the creative act,” says Kent Beck, a longtime guru of the software industry who has been coding since 1972. Ten years ago, he mostly stopped writing software; he was frustrated with the latest languages and software tools. But L.L.M.s got him going again, and he’s now cranking out more projects than ever: a personalized note-taking app, new types of databases. Even the fact that A.I.’s output can be unpredictable — if you ask it to write a piece of code, it might do so in a slightly different way each time — “is addictive, in a slot-machine way.”
A few programmers did say that they lamented the demise of hand-crafting their work. “I believe that it can be fun and fulfilling and engaging, and having the computer do it for you strips you of that,” one Apple engineer told me. (He asked to remain unnamed so he wouldn’t get in trouble for criticizing Apple’s embrace of A.I.) He went on: “I didn’t do it to make a lot of money and to excel in the career ladder. I did it because it’s my passion. I don’t want to outsource that passion.” He also worries that A.I. is atomizing the work force. In the past, if developers were stuck on an intractable bug, they asked colleagues for advice; today they just ask the agents. But only a few people at Apple openly share his dimmer views, he said.
The coders who still actively avoid A.I. may be in the minority, but their opposition is intense. Some dislike how much energy it takes to train and deploy the models, and others object to how they were trained by tech firms pillaging copyrighted works. There is suspicion that the sheer speed of A.I.’s output means firms will wind up with mountains of flabbily written code that won’t perform well. The tech bosses might use agents as a cudgel: Don’t get uppity at work — we could replace you with a bot. And critics think it is a terrible idea for developers to become reliant on A.I. produced by a small coterie of tech giants.
Thomas Ptacek, a Chicago-based developer and a co-founder of the tech firm Fly.io, has seen the lacerating fights between the developers who love A.I. and those few who hate it, and “it’s a civil war,” he told me. He’s in the middle. He thinks the refuseniks are deluding themselves when they claim that A.I. doesn’t work well and that it can’t work well. “It’s like being gaslit,” he says. The holdouts are in the minority, and “you can watch the five stages of grief playing out.”
But there’s evidence that A.I. is now eroding entry-level coding jobs. Last year, Erik Brynjolfsson, an economist who directs the Stanford Digital Economy Lab, and his colleagues analyzed occupations by workers’ age group and by how easily the work could be done by A.I. He found that computer programmers had one of the most “A.I.-exposed” jobs — and junior developers were hit the hardest. The number of jobs for those between the ages of 22 and 25 (when one is most likely to be entering the field) had declined by 16 percent since 2022, while older programmers saw no significant decrease.
Several developers suggested, in fact, that the number of software jobs might actually grow. An untold number of small firms around the country would love to have their own custom-made software but have never been big enough to hire, say, the five-person programming team necessary to produce it. What if they could hire a single A.I.-assisted coder to do that same work, or even a part-time one? This is, as Brynjolfsson notes, a version of the “Jevons paradox”: When something gets cheaper to do, we don’t just pocket the savings — we do more of it. Though it could also be that these software jobs won’t pay as well as in the past because, of course, the jobs aren’t as hard as they used to be. Acquiring the skills isn’t as challenging.
How things will shake out for professional coders themselves isn’t yet clear. But their mix of exhilaration and anxiety may be a preview for workers in other fields. Anywhere a job involves language and information, this new combination of skills — part rhetoric, part systems thinking, part skepticism about a bot’s output — may become the fabric of white-collar work. Skills that seemed the most technical and forbidding can turn out to be the ones most easily automated. Social and imaginative ones come to the fore. We will produce fewer first drafts and do more judging, while perhaps feeling uneasy about how well we can still judge. Abstraction may be coming for us all.
Clive Thompson interviewed more than 70 software developers at Google, Amazon, Microsoft and small start-ups. He is the author of “Coders: The Making of a New Tribe and the Remaking of the World.”
I recently visited Ebert, a machine-learning engineer and former neuroscientist, at the spare apartment where he and Conor Brennan-Burke run their start-up, Hyperspell. Ebert, a tall and short-bearded 39-year-old with the air of a European academic, sat before a mammoth curved monitor. Onscreen, Claude Code — the A.I. tool from Anthropic — was busy at work. One of its agents was writing a new feature and another was testing it; a third supervised everything, like a virtual taskmaster. After a few minutes, Claude flashed: “Implementation complete!”
Ebert grew up in the ’90s, learning to code the old-fashioned way: He typed it out, line by painstaking line. After college, he held jobs as a software developer in Silicon Valley for companies like Airbnb before becoming a co-founder of four start-ups. Back then, developing software meant spending days hunched over his keyboard, pondering gnarly details, trying to avoid mistakes.
All that ended last fall. A.I. had become so good at writing code that Ebert, initially cautious, began letting it do more and more. Now Claude Code does the bulk of it. The agents are so fast — and generally so accurate — that when a customer recently needed Hyperspell to write some new code, it took only half an hour. In the before times? “That alone would have taken me a day,” he said.
He and Brennan-Burke, who is 32, are still software developers, but like most of their peers now, they only rarely write code. Instead, they spend their days talking to the A.I., describing in plain English what they want from it and responding to the A.I.’s “plan” for what it will do. Then they turn the agents loose.
A.I. being A.I., things occasionally go haywire. Sometimes when Claude misbehaves and fails to test the code, Ebert scolds the agent: Claude, you really do have to run all the tests.
I looked at Ebert’s prompt file. It included a prompt telling the agents that any new code had to pass every single test before it got pushed into Hyperspell’s real-world product. The tests for Python code, run with a tool called pytest, had their own specific prompt that caught my eye: “Pushing code that fails pytest is unacceptable and embarrassing.”
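For readers who haven't seen one, a pytest test is nothing exotic: pytest collects any function whose name starts with "test_" and runs its bare assert statements. A minimal illustrative sketch (a hypothetical helper function, not Hyperspell's actual code):

```python
# A minimal pytest-style test: pytest discovers functions named "test_*"
# and runs each bare assert. (Illustrative only -- not Hyperspell's code.)

def normalize_email(address: str) -> str:
    """Hypothetical helper: trim whitespace and lowercase an e-mail address."""
    return address.strip().lower()

def test_normalize_email():
    assert normalize_email("  Alice@Example.COM ") == "alice@example.com"
    assert normalize_email("bob@example.com") == "bob@example.com"
```

Running `pytest` in a project directory discovers and executes every such test; an agent that "pushes code that fails pytest" is one that shipped despite a failing assert.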
Embarrassing? Did that actually help, I wondered, telling the A.I. not to “embarrass” you? Ebert grinned sheepishly. He couldn’t prove it, but prompts like that seem to have slightly improved Claude’s performance.
His experience is not unusual; many software developers these days berate their A.I. agents, plead with them, shout important commands in uppercase — or repeat the same command multiple times, like a hypnotist — and discover that the A.I. now seems to be slightly more obedient. Such melodramatic prose might seem kind of nuts, but as their name implies, large language models are language machines. “Embarrassing” probably imparted a sense of urgency.
Brennan-Burke chimed in: “You remember seeing the research that showed the more rude you were to models, the better they performed?” They chuckled. Computer programming has been through many changes in its 80-year history. But this may be the strangest one yet: It is now becoming a conversation, a back-and-forth talk fest between software developers and their bots.
This vertiginous shift threatens to stir up some huge economic consequences. For decades, coding was considered such wizardry that if you were halfway competent you could expect to enjoy lifetime employment. If you were exceptional at it (and lucky), you got rich. Silicon Valley panjandrums spent the 2010s lecturing American workers in dying industries that they needed to “learn to code.”
Now coding itself is being automated. To outsiders, what programmers are facing can seem richly deserved, and even funny: American white-collar workers have long fretted that Silicon Valley might one day use A.I. to automate their jobs, but look who got hit first! Indeed, coding is perhaps the first form of very expensive industrialized human labor that A.I. can actually replace. A.I.-generated videos look janky, artificial photos surreal; law briefs can be riddled with career-ending howlers. But A.I.-generated code? If it passes its tests and works, it’s worth as much as what humans get paid $200,000 or more a year to compose.
You might imagine this would unsettle and demoralize programmers. Some of them, certainly. But I spoke to scores of developers this past fall and winter, and most were weirdly jazzed about their new powers.
“We’re talking 10 to 20 — to even 100 — times as productive as I’ve ever been in my career,” Steve Yegge, a veteran coder who built his own tool for running swarms of coding agents, told me. “It’s like we’ve been walking our whole lives,” he says, but now they have been given a ride, “and it’s fast as [expletive].” Like many of his peers, though, Yegge can’t quite figure out what it means for the future of his profession. For decades, being a software developer meant mastering coding languages, but now a language technology itself is upending the very nature of the job.
The enthusiasm of software developers for generative A.I. stands in stark contrast to how other Americans feel about the impact of large language models. Polls show a majority are neutral or skeptical; creatives are often enraged. But if coders are more upbeat, it’s because their encounters with A.I. are diametrically opposite to what’s happening in many other occupations, says Anil Dash, a friend of mine who is a longtime programmer and tech executive. “The reason that tech generally — and coders in particular — see L.L.M.s differently than everyone else is that in the creative disciplines, L.L.M.s take away the most soulful human parts of the work and leave the drudgery to you,” Dash says. “And in coding, L.L.M.s take away the drudgery and leave the human, soulful parts to you.”
Coding has been drudgery, historically. In the movies, programmers excitedly crank out code at typing speed. In reality, writing software has always been an agonizingly slow and frustrating affair. You write a few lines of code, a single “function” that does one little thing, and then discover that you made some niggling error, like leaving out a single colon. As a company’s “codebase” — every line of code in its software, accreting over the years — gets larger and involves dozens or thousands of functions interacting with one another, you could spend hours, days or weeks pulling your hair out trying to find which little mistakes are bringing everything to a halt. Maybe a line of yours broke something your colleague is coding two cubicles over.
For decades, computer engineers tried to automate this drudgery. In the industry, they call every step in this direction “adding a layer of abstraction”: If you often find yourself doing something step by step in an onerous fashion, you automate it.
For example, one early computer language was Assembly, and it was devilishly hard to write. Computers had very little memory, so coders had to be efficient in how they used it, putting each bit of data carefully in place and then keeping mental track of it. Even simple calculations required an incremental, meticulous approach. Say you wanted to write some code that would calculate how much you’d have if you got 5 percent interest on $10,000 over 10 years. Back in the 1960s, that would have required perhaps nine lines of pretty obtuse Assembly: “VAL, FLDECML 10000.0” to set the starting amount at $10,000; “CLA VAL” to load the amount into the processor; “FAD ZERO” to tell the computer you’re working with numbers that have decimal points; and so on.
By the ’80s and ’90s, as computers became more powerful, engineers were able to create languages that took care of all that memory management for you, and also turned common asks into simple commands. In Python, a coder can perform that exact same calculation very simply: “total_amount = 10000 * (1.05 ** 10).” That single line tells the computer to multiply 10,000 by the interest rate over 10 years and store the result in the variable labeled “total_amount.” Programmers no longer need to think about where all the data is being stored in the computer’s memory; Python does that for them. It is, in other words, a layer of abstraction on top of all that fiddly memory business. Writing in that language is delightfully easier.
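That one-liner runs exactly as written. Spelled out with named variables, the same compound-interest calculation looks like this:

```python
# The compound-interest calculation from the passage above:
# $10,000 at 5 percent interest, compounded annually for 10 years.
principal = 10000
rate = 1.05   # 5 percent growth per year
years = 10

total_amount = principal * (rate ** years)
print(round(total_amount, 2))  # → 16288.95
```

Not a byte of memory management in sight — which is the whole point of the abstraction.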
During the 2000s and 2010s, programmers abstracted away more and more scut work. Virtually anytime they encountered an onerous task, they wrote some code to automate it and then — very often — made it open source, giving it away for others to use. Here’s an example: As a hobbyist programmer, I sometimes want to automatically “scrape” the text from a website. I’ve never written code myself to do that; I just use Beautiful Soup, a freely available package of thousands of lines of Python code that manages all the complexity. I don’t even need to understand how Beautiful Soup works. It just gives me simple, typically one-line Python commands that — whoosh — retrieve and analyze website text for me. A significant amount of software is produced in precisely this way: developers stitching together big piles of code that someone else wrote.
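To make the abstraction concrete, here is a rough sketch of the kind of drudgery a package like Beautiful Soup hides, written with only Python's standard-library HTML parser. (This is illustrative, not Beautiful Soup's internals; with the package installed, the whole exercise collapses to roughly `BeautifulSoup(html, "html.parser").get_text()`.)

```python
# Extracting the visible text from HTML using only the standard library --
# the sort of fiddly work Beautiful Soup wraps in a single get_text() call.
from html.parser import HTMLParser

class TextExtractor(HTMLParser):
    """Collects text content, skipping <script> and <style> blocks."""

    def __init__(self):
        super().__init__()
        self.parts = []
        self._skip = 0  # nesting depth inside script/style tags

    def handle_starttag(self, tag, attrs):
        if tag in ("script", "style"):
            self._skip += 1

    def handle_endtag(self, tag):
        if tag in ("script", "style") and self._skip > 0:
            self._skip -= 1

    def handle_data(self, data):
        if self._skip == 0 and data.strip():
            self.parts.append(data.strip())

html = "<html><body><h1>Hello</h1><script>var x=1;</script><p>World</p></body></html>"
extractor = TextExtractor()
extractor.feed(html)
print(" ".join(extractor.parts))  # → Hello World
```

Multiply that by edge cases — malformed tags, encodings, nested tables — and you see why nobody writes this by hand anymore.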
So what exactly is left? Or as Boris Cherny, the head of Claude Code, put it when we met at Anthropic’s headquarters in January: “What is computation — what is coding?” Then he added, “You can get pretty philosophical pretty fast.”
His answer echoed what I’ve heard from pretty much every developer I’ve spoken to: A coder is now more like an architect than a construction worker. Developers using A.I. focus on the overall shape of the software, how its features and facets work together. Because the agents can produce functioning code so quickly, their human overseers can experiment, trying things out to see what works and discarding what doesn’t. Several programmers told me they felt a bit like Steve Jobs, who famously had his staffers churn out prototypes so he could handle lots of them and settle on what felt right. The work of a developer is now more judging than creating.
Cherny himself has been through all the layers of abstraction: As a teenager in California, he taught himself a little Assembly so he could write a program that solved math homework automatically on his calculator. Today he simply pulls out his phone and dictates to Claude what he wants the A.I. agent to do; in a sort of Ouroboric loop, 100 percent of Cherny’s contributions to the Claude codebase are now written entirely by Claude.
While we talked, his phone was sitting on the table in front of us, and at the end of an hour he showed me the screen: 10 Claude agents had been tweaking the codebase. “I haven’t written a single line by hand, and I’m like the most prolific coder on the team,” he said. “It’s an alien intelligence that we’re learning to work with.”
For most of the coders I met, learning to work with A.I. means learning to talk to A.I. This struck me as an unexpected paradox of this new age, because traditionally coding was a haven for introverts who preferred to talk as little as possible to others at work. But now their entire job involves constantly chatting with this alien life form.
If describing and talking are now much of the work of a software developer, the talk nonetheless remains pretty complex and highly technical. An amateur can’t do it. You can’t just tell an agent, Build me the code for a successful start-up. The agents work best when they’re being asked to perform one step at a time; ask for too much and they can lose the plot. Aayush Naik, whose start-up in San Francisco uses Claude Code, says it’s a delusion to imagine that your A.I. agent will generate a whole project at once, in a “Big Bang” moment. Yes, you can get it to write 5,000 lines of code — but then, he says, “you test it and nothing works.” This, all the software developers say, is where their training and expertise are still needed: knowing how a big codebase ought to be structured, how to design the system so it’s reliable and how to figure out if the agent is sloppy.
Even with this occasional backtracking, Claude codes so much faster than Yanovsky that he struggles to put a number on how much faster he can now get his work done. “Like, 20X?” he offered. What once took weeks now takes hours. Every Silicon Valley founder he knows is experiencing the same thing. If you want to build a company in a hurry, nobody does it by hand anymore.
The fact that A.I. can boost coder productivity so drastically has been one of the more remarkable talking points in the field. I’ve noticed this myself: Just last week, I needed a web tool to clean up some messy transcripts, and I used A.I. to build it in about 10 minutes. On my own, it would have taken an hour, possibly longer.
But software start-ups — or individuals like me who are vibe-coding their own small apps — are a special case. They involve what’s known in the industry as “greenfield” coding, where there are no pre-existing lines of code to deal with. An entirely new codebase is being created from scratch.
A vast majority of software developers aren’t working in greenfield contexts. They’re “brownfield,” employed by mature companies, where the code was written years (or decades) earlier and already reaches millions or billions of lines. Rapidly adding new functions is usually a terrible idea — they might accidentally conflict with another part of the code and break something that millions of customers rely on. At most mature software firms, coders historically spent a minority of their time — sometimes barely more than an hour per day — actually writing code. The rest was planning, hashing out priorities and meeting to discuss progress.
This is the curse of success, and why big, established software firms can be slower to deliver upgrades than younger companies. Before a coder’s new work is released, colleagues and higher-ups typically put it through a “code review,” looking carefully at its lines and the results of any testing. If you want to put a number on how much more productive A.I. is making the programmers at mature tech firms like Google, it’s 10 percent, Sundar Pichai, Google’s chief executive, has said.
That’s the bump that Google has seen in “engineering velocity” — how much faster its more than 100,000 software developers are able to work. And that 10 percent is the average inside the company, Ryan Salva, a senior director of product at the company, told me. Some work, like writing a simple test, is now tens of times faster. Major changes are slower. At the start-ups whose founders I spoke to, closer to 100 percent of their code is being written by A.I., but at Google it is not quite 50 percent.
I visited Salva in Sunnyvale, Calif., to shoulder-surf as he showed me how L.L.M.s have been woven into Google’s work flow. For a firm with billions of lines of code, he noted, A.I.’s value isn’t necessarily in writing new code so much as in figuring out what’s going on with the existing lines. Developers will use it to analyze and explain what “sprawling” portions of code are doing, so they can determine how to help improve or alter it.
“A.I. is much better at wading into an unfamiliar part of the codebase, making sense of what’s happening,” he told me. It also helps developers work in languages they might not be very familiar with. As a result, developers on Salva’s team form smaller groups: A year ago, these might have needed 30 people, each with their own specialty. Now a group needs only three to six people, which enables them to move more nimbly, so “we’re able to clear through a lot more of our backlog,” Salva said.
Salva opened up his code editor — essentially a word processor for writing code — to show me what it’s like to work alongside Gemini, Google’s L.L.M. For the first few years of the A.I. boom, he said, it was still “very much what I would describe as ‘human in the loop.’” The A.I. assisted but didn’t work independently. While he typed away, Gemini analyzed a piece of code for him, explaining whether it had been fully tested or not. When it suggested a few new lines, it was up to him to accept them.
“As an engineer, I care less that the models are really good at producing the right result the first time,” he said. “I care much more that there are validation steps in place so that it eventually gets the perfect or the right answer.”
A 10 percent increase in Google’s “velocity” may seem underwhelming, Salva noted, given the hoopla around A.I. “We have collectively — both in the software industry as well as in the media — oh, my God, created a hype cycle,” he had told me when we first talked, last summer in New York. But the reality was impressive enough for him. “We should be delighted when there’s 10 percent efficiency gains for the entire company. That’s freaking bonkers!”
These digital renovations have sped up, too. McLaren Stanley, a senior principal engineer at Amazon, recently modernized a piece of code he had personally written years earlier. The original version had taken a month to create; this time, with the help of Amazon’s in-house A.I., he finished the job in a morning. His team has similarly reworked other big chunks of code. One of A.I.’s key advantages, Stanley told me, is that it makes it easier to try out new ideas. “Things I’ve always wanted to do now only take a six-minute conversation and a ‘Go do that,’” he says.
Silicon Valley has already been through a huge wave of layoffs. During the 2010s, tech firms were hiring aggressively, competing for new grads and adding an average of 74,000 new employees a year, according to the Bureau of Labor Statistics. Job postings soared in the early years of the pandemic. Then firms abruptly reversed course, and postings for new jobs collapsed. More than 700,000 tech workers have been laid off in the last four years, according to Roger Lee at Layoffs.fyi (this number includes all jobs in tech).
Most tech observers say A.I. probably wasn’t the cause of those layoffs because, at the time, it wasn’t yet good enough to replace coders. Other factors, they figure, were more significant: Interest rates rose, so tech firms lost their easy growth money. Companies that overhired shed that excess capacity. Some also suspect that when Elon Musk bought Twitter and said he laid off 80 percent of his work force, tech executives at other firms took note and decided that maybe they didn’t need so many engineers either.
Virtually all of the tech executives I’ve spoken to, from those at coastal giants to those at small regional firms, have sworn to me that A.I. would not stop them from hiring appealing new talent. It’s true that A.I. makes their existing developers more productive, but they always need more done.
“In my many years at Google, we have always been constrained by having way, way, way more ideas of things we would like to do than there was time and energy and hours in the day to go do them,” Jen Fitzpatrick, the company’s senior vice president for Google Core Systems & Experiences, told me. “I have never met a team at Google who says, ‘You know, I’m out of good ideas.’ The answer is always, ‘The list of things I would like to do is nine miles longer than what we can pull off.’”
This question of skills can lead in some unsettling directions, though, when you chase it down. Many midcareer coders told me they felt confident using A.I. because they had spent decades developing a strong sense of what good, efficient code looks like. That allows them to explain to the agents precisely what they want and lets them spot quickly when the agents have cranked out something inefficient or sloppy.
But what happens to the next generation? Will they still develop that intuitive sense for code? If your job is now less about writing than assessing, how will newbies learn to assess?
Some new developers told me they can feel their skills weakening. Pia Torain is a software engineer for Point Health A.I., and she was only two years into her job when, in the summer of 2024, the company told her to start using GitHub’s Copilot code-writing tool. “I realized that it was just four months that I was prompting hundreds, 500 prompts a day, that I started to lose my ability to code,” she says. She stopped using the tools for a while; these days, she’ll have A.I. write for her, but she carefully reads the output, making sure she’s absorbing how the code works. “If you don’t use it,” Torain told me, “you’re going to lose it.”
Point Health co-founder Rachel Gollub is less worried. She has been a software developer for almost 40 years, and for decades coders have worried that the craft is imminently doomed. When languages like Python and JavaScript emerged, they abstracted away the need to think about memory management, so developers stopped needing those skills. The old-school coders caterwauled: It’s not real coding unless you’re managing your own memory!
“People were all like, ‘You’re losing all your ability to code,’” Gollub told me. But plenty of big, reliable companies — Dropbox, say — relied heavily on newer languages like Python, and they have worked fine. Memory management is crucial in only a subset of coding tasks today, such as with devices that don’t have much computing power. The vast majority of the software industry has moved on. Gollub expects the same transition will happen as A.I. tools become the norm.
Writing code is now so highly abstracted that nearly anyone could crack open an L.L.M. and describe an app. Maybe not a complex one. But if they needed some simple software for personal use? An A.I. could likely craft it.
This is the cultural side effect of coding becoming conversational: The realms of programmers and everyday people, separated for decades by an ocean of arcane know-how, are drifting closer together. If code-writing A.I. continues to improve, there will likely be far more people in Cuisy’s situation — the Jevons paradox in action. “Maybe they don’t label themselves as software engineers, but they’re creating code,” Brynjolfsson says. “A lot of people have ideas.” The world becomes flooded with far more software than ever before — written by individuals, for individuals.