
Highlights

  • The latest appears to be Amazon — though one can debate whether it’s taking away the right lessons. On Tuesday, the Financial Times reports, the ecommerce giant summoned a large group of engineers to a meeting addressing recent outages plaguing its online retail business, some of them related to AI coding tools.
  • In a meeting briefing note, the company described the “trend of incidents” as characterized by a “high blast radius” and “Gen-AI assisted changes.” As a “contributing factor,” the note listed “novel GenAI usage for which best practices and safeguards are not yet fully established.”
  • “Folks, as you likely know, the availability of the site and related infrastructure has not been good recently,” Dave Treadwell, a senior vice-president at Amazon’s eCommerce Services, told employees in an email, per the FT.
  • The meeting follows a nearly six-hour outage last week that took down Amazon’s shopping website and app, leaving customers unable to place orders. In the aftermath, the company blamed a botched “software code deployment.”
  • In another series of incidents at its cloud computing division, Amazon Web Services, two separate outages occurred after engineers allowed the company’s in-house AI coding tool to make disastrous changes, additional FT reporting revealed last month. In one case, the AI tool deleted and recreated the entire coding environment.
  • In response to the earlier reporting, Amazon framed these blunders as an issue related to its protocols around AI usage and “user access control,” rather than an AI autonomy issue, and it appears to be sticking to its guns. The company will not be backing away from deploying AI but is instead insisting on stronger guardrails and more oversight on how it’s used.
  • There’s no question that AI tools, if they should be used at all, should be closely supervised, especially in programming roles. Like any generative AI model, AI coding tools frequently allow errors through and sometimes struggle to follow instructions, meaning they can take actions that a user never intended.
  • But Amazon’s renewed focus on implementing more human oversight comes as it’s fired hundreds of workers from its cloud computing division and as it targets laying off 30,000 employees across its corporate workforce overall. Meanwhile, management leans on programmers to heavily use AI tools, with employees previously telling the FT that the company set a target for 80 percent of developers to use AI for coding tasks at least once a week.
  • In sum: more coding with more AI with more human oversight, but fewer humans. We’ll see how that works out.