Highlights

  • Our society has only recently come to terms with the fact that maybe placating young children by getting them hooked on touchscreen devices with unfettered access to the internet was bad for their brains. (View Highlight)
  • Now, with the rise of human-like AI chatbots, a generation of “iPad babies” could seem almost quaint: some parents are now encouraging their kids to talk with AI models, sometimes for hours on end, The Guardian reports. Others are using the soothing voice of a chatbot to put their kids to bed — all evidence of how this experimental technology has become a part of many people’s daily lives, despite major questions about how the tech affects our mental health. (View Highlight)
  • one tired father named Josh admitted that he gave his preschooler his phone to let him talk to ChatGPT’s Voice Mode. “After listening to my four-year-old son regale me with the adventures of Thomas the Tank Engine for 45 minutes I tapped out,” he wrote, “so I opened ChatGPT.” In an interview with The Guardian, Josh said he needed to do chores and thought his son “would finish the story and the phone would turn off.” (View Highlight)
  • AI chatbots have been implicated in the suicides of several teenagers, while a wave of reports detail how even grown adults have become so entranced by their interactions with sycophantic AI interlocutors that they develop severe delusions and suffer breaks with reality — sometimes with deadly consequences. (View Highlight)
  • These episodes have sparked a wave of alarm and scientific inquiry into how extensive conversations with large language models that are designed to please you and keep you talking could be affecting our brains. They add to already longstanding concerns about the tech’s unreliable and easily circumventable safeguards, with chatbots frequently being caught giving dangerous advice to young users, like how to self-harm, or encouraging suicide. (View Highlight)
  • Yet toymakers like Mattel are rushing to shove AI into their products, while AI companion platforms are pushing kid- or teen-friendly personalities, thereby whitewashing the tech’s image by packaging it into something innocuous. (View Highlight)
  • These are extreme examples of how the tech can go wrong. But with young and impressionable minds, the effects can be subtler and no less worrying. Some research has shown that children view AI chatbots as existing somewhere between animate and inanimate beings, Harvard Graduate School of Education professor Ying Xu told The Guardian — but there’s a meaningful difference between a kid imagining personalities for their dolls and toys and having a conversation with a chatbot. (View Highlight)
  • “A very important indicator of a child anthropomorphizing AI is that they believe AI is having agency,” Xu told the newspaper. “If they believe that AI has agency, they might understand it as the AI wanting to talk to them or choosing to talk to them. They feel that the AI is responding to their messages, and especially emotional disclosures, in ways that are similar to how a human responds.” (View Highlight)
  • Some parents also used AI to generate images, replacing another way children explore the world: dreaming up fantasies in their heads and expressing them by doodling on paper. Ben Kreiter, a father of three in Michigan, told The Guardian that after he introduced his kids to ChatGPT’s image generating tools, they started asking to use them every day. John, a father of two from Boston, said he used Google’s AI to conjure up a mash-up of a fire truck and a monster truck after his big-rig-obsessed four-year-old boy asked if a “monster-fire-truck” existed. (View Highlight)