Metadata

  • Author: Dan Rockmore
  • Full Title: What It’s Like to Brainstorm with a Bot | The New Yorker

Highlights

  • At the frontiers of knowledge, researchers are discovering that A.I. doesn’t just take prompts—it gives them, too, sparking new forms of creativity and collaboration.
  • We sometimes forget that the machine is less oracle than broad interlocutor.
  • “Your invention will enable them to hear many things without being properly taught, and they will imagine that they have come to know much while for the most part they will know nothing. And they will be difficult to get along with, since they will merely appear to be wise instead of really being so.” That’s from Plato’s Phaedrus, where Socrates presents, with sympathy, the case against the treacherous technology of writing. It could have been written yesterday, as a warning against gen A.I., by any number of my own colleagues.
  • The academy evolves slowly—perhaps because the basic equipment of its workers, the brain, hasn’t changed much since we first took up the activity of learning. Our work is to push around those ill-defined things called “ideas,” hoping to reach a clearer understanding of something, anything.
  • Socrates’ worries reflect an entrenched suspicion of new ways of knowing. He was hardly the last scholar to think his generation’s method was the right one. For him, real thinking happened only through live conversation; memory and dialogue were everything. Writing, he thought, would undermine all that: it would “cause forgetfulness” and, worse, sever words from their speaker, impeding genuine understanding. Later, the Church voiced similar fears about the printing press. In both cases, you have to wonder whether skepticism was fuelled by lurking worries about job security.
  • We don’t have to look far, in our own age of distraction and misinformation, to see that Socrates’ warnings weren’t entirely off the mark. But he also overlooked some rather large benefits. Writing—helped along by a bit of ancient materials science—launched the first information age. Clay tablets were the original hard drives, and over time writing more than earned its keep: not just as a tool for education and the development of ideas but (to address what Socrates might really have been worried about) as a tremendous engine for employment in the knowledge economy of its day, and for centuries after. For all that, writing never did supplant dialogue; we still bat around ideas out loud. We just have more ideas to talk about. Writing was, and remains, the original accelerator for thought.
  • Still, for all its creative utility, writing is not much of a conversational partner. However imperfectly, it captures what’s in the writer’s head—Socrates called it a reminder, not a true replication—without adding anything new. Large language models (L.L.M.s), on the other hand, often do just that. They have their own pluses and minuses, and the negatives have received plenty of airtime. But Luke’s story, and those of a growing cohort of “next-gen” professors (Luke was recently tenured), reveal what’s genuinely novel: these new generative-A.I. tools aren’t just turbocharged search engines or glorified writing assistants. They’re collaborators.
  • As machines insinuate themselves further into our thinking—taking up more cognitive slack, performing more of the mental heavy lifting—we keep running into the awkward question of how much of what they do is really ours. Writing, for instance, externalizes memory. Our back-and-forths with a chatbot, in turn, exteriorize our private, internal dialogues, which some consider constitutive of thought itself.
  • And yet the reflex is often to wave away anything a machine produces as dull, mechanical, or unoriginal, even when it’s useful—sometimes especially when it’s useful. You get the sense that this is less about what machines can do than about a certain self-protectiveness. Hence the constant, anxious redrawing of the boundaries between human and machine intelligence. These moving goalposts aren’t always set by careful argument; more often, they’re a kind of existential staking of territory. The prospect of machine sentience hangs over all of this like a cloud.
  • With the L.L.M., I can ask “dumb questions” in private. I encourage my students to do the same—not so they’ll stay out of my office but so that, when they come, their time with me is better spent. I do it when I’m stretching into a new field or collaborating with friends in areas they know much better than I do. The L.L.M. softens my self-consciousness and makes the ensuing conversations richer and more fun.
  • This style of research—wandering around, then zeroing in—is a version of the ancient fox-hedgehog distinction made famous by Isaiah Berlin. (Archilochus: “The fox knows many things, but the hedgehog knows one big thing.”) In the exploratory phase, I’m the fox, sniffing around in books, conversations, half-baked theories of my own. Then the hedgehog takes over. The L.L.M. amplifies both modes: it makes me a wider-ranging fox and a quicker, more incisive hedgehog.
  • Jeremy’s tinkerbot gives me hope. To what extent are my scattered thoughts like his code fragments—half-finished, abandoned, waiting for rescue? Could a machine revive a box of my broken or discarded ideas, turning them into something that the wider world would find useful and interesting? And if a machine, furnished with a carefully written set of instructions and seeded with the world’s stockpile of realized ideas, could begin generating new ones, would we still insist that true originality belongs only to people? Some cling to the belief that new ideas are conjured from the ineffable depths of the human spirit, but I’m not so sure. Ideas have to come from somewhere, and, for both humans and machines, those somewheres are often the words and images we’ve absorbed.
  • I’m reminded of the Grimms’ fairy tale “The Elves and the Shoemaker.” A poor but gifted shoemaker is barely keeping his business afloat. He has the talent, but not enough time or resources. Enter a band of cheerful, industrious elves who work through the night, quietly finishing his designs. With the elves in the background, the shoemaker and his wife build a thriving business. They might have simply let the good times roll, but instead, in a gesture of thanks, the shoemaker’s wife—a deft seamstress herself—makes the elves a set of fine clothes, and the elves happily move on. The shoemaker and his wife continue, now on surer footing. No doubt they even learned a thing or two about their craft by observing the elves at work. Maybe they later expanded their shop to produce jerkins and satchels. I like to imagine those elves making the rounds, boosting the fortunes of craftspeople everywhere. “The Elves and the Shoemaker” is one of the few Grimms’ tales where everyone leaves happy.
  • Is there a future where we simply lay out the thought-leather, rough and unfinished, set the machine going, and return to admire—and take credit for—the handiwork? The shoemaker always had talent; what he and his wife lacked was the means to turn it into a living. The elves didn’t put them out of work; they propelled them to a higher level, allowing them to make custom shoes efficiently, profitably, and cheerfully.
  • Most of the time, I see our digital assistants as those helpful elves. I’m not naïve about the risks. You can imagine a WALL-E scenario of academia’s future: scholars lounging in comfort, feeding stray ideas to machines and then sitting back to read the output. Though every new tool offers the promise of an easier path, when it comes to creativity, vigilance is required; we can’t let the machine’s product become the unquestioned standard. I bet that even those elves made some shoes that had to be put in the seconds pile. Research, writing, and, above all, thinking have always meant more than simply producing an answer.
  • As the physicist Richard Feynman once said, “The prize is the pleasure of finding the thing out.” That’s what keeps a lot of us going.
  • These days, we’re in an uneasy middle ground, caught between shaping a new technology and being reshaped by it. The old guard, often reluctantly, is learning to work with it—or at least to work around it—while the new guard adapts almost effortlessly, folding it into daily practice. Before long, these tools will be part of nearly everyone’s creative tool kit. They’ll make it easier to generate new ideas, and, inevitably, will start producing their own. They will, for better or worse, become part of the landscape in which our ideas take shape.