• Like any technological revolution, AI is putting a premium on a new set of skills. Only this time, the skills might be best acquired in a writing workshop or a philosophy seminar.
  • the somewhat paradoxical idea that, thanks to the AI revolution, we are entering a period where it will be a great time to be a humanities major with an interest in technology.
  • The simple fact of the matter is that interacting with the most significant technology of our time—language models like GPT-4 and Gemini—is far closer to interacting with a human than to how we have historically interacted with machines. If you want the model to do something, you just tell it what you want it to do, in clear, persuasive prose. People who have command of clear and persuasive prose have a competitive advantage right now in the tech sector, or really in any sector that is starting to embrace AI. Communication skills have always been an asset, of course, but thanks to language models they are now a technical asset, like knowing C++ or understanding how to maintain a rack of servers.
  • This is, of course, a variation on Andrej Karpathy’s quip from more than a year ago: “The hottest new programming language is English.” But it’s more than that, I think. The core skills are not just about straight prompt engineering; they’re not just about figuring out the most efficient wording to get the model to do what you want.
  • What is the most responsible behavior to cultivate in the model, and how do we best deploy this technology in the real world to maximize its positive impact? What new forms of intelligence or creativity can we detect in these strange entities? How do we endow them with a moral compass, or steer them away from bias and inaccurate stereotypes? Can language alone generate a robust theory of how the world works, or do you need more explicit rules or additional sensory information?
  • All of those questions have been absolutely central to the discussion of AI for the past two years, but if you think about it, they were all questions that belonged to the humanities until the language models came along: ethics, philosophy of language, political theory, history of innovation, and so on.
  • I don’t want to carry this argument too far. Some of my training as a writer has come in useful in creating NotebookLM, through the design of our core prompts and the overall “voice” of the product. But Notebook itself would not exist without the exceptional engineering talent of our extended team, from our front-end and back-end programmers to the people who built the enormously complex infrastructure of the models themselves. Perhaps someday it will be possible for a code-illiterate person like myself to conjure an entire application into being just by describing the feature set to a language model, but we are not there yet. And of course building the models themselves will almost certainly continue to require skills that are best honed in engineering and computer science programs, not writing seminars.
  • But I do think it is undeniable that the rise of AI has ushered humanities-based skills into the very center of the tech world right now. In his last product introduction before his death, Steve Jobs talked about Apple residing at the intersection of the liberal arts and technology; he literally showed an image of street signs marking that crossroads. But the truth is that, back then, most of the travelers on the liberal arts avenue were designers. There wasn’t as much need for philosophers or ethicists or even writers in building the advanced consumer technology of that era. But now those skills have a new relevance.
  • I recommend watching the video starting around the thirty-minute mark where we really dive into the exercise — I think it’s probably the best example to date of the kind of high-level creative and conceptual work that NotebookLM makes possible, where the software is truly helping you make new connections and synthesize information far more easily than would have been possible before.
  • But the thing I also want to draw your attention to is how much Dan is driving the process, by suggesting a series of prompts that ultimately elicit some astonishing—even to me—results from NotebookLM. I was going into it more or less planning on showcasing NotebookLM’s ability to extract and organize a complex string of facts out of a disorganized collection of source material, like creating a timeline of all the events associated with the fire, or suggesting key passages that I could read to understand the impact of the fire. (All with our new inline citations, which are pretty amazing in their own right.) But at a certain point, Dan really takes the wheel, and says, effectively: “This is a Steven Johnson project, and so it’s got to have some surprising scientific or technological connection that the reader/viewer wouldn’t expect; let’s ask NotebookLM to help us find that angle.” And then we just go on a run—again, largely driven by Dan’s prompting—that takes us to some pretty amazing places, and even generates the opening lines of a script by the end of it.
  • What you can see in this sequence are two things: 1) a remarkably capable language model doing things with a large corpus of source material that really would have been unthinkable just a year ago—but just as importantly 2) a very smart human being who knows how to probe the source information and unlock the skills of the language model to generate the most useful and interesting results. The skill that Dan displays here is basically all about being able to think through this problem: Given this body of knowledge, given the abilities and limitations of the AI, and given my goals, what is the most effective question or instruction that I can propose right now? I don’t know whether you’re better off with a humanities background or an engineering background in developing that talent, but I do believe it has become an enormously valuable talent to have.
  • The other thing worth noting in the exchange—and I take a step back to reflect on it in the middle of the exercise—is the range of intelligences involved in the project. On the one hand you have the intelligence of all the astronauts and flight directors contained in the interview transcripts themselves; you have the intelligence of all the authors whose quotes I have gathered over the past two decades of research and reading; you have the intelligence of two humans who are asking questions and steering the model’s attention towards different collections of sources, crafting prompts to generate the most compelling insights; and then you have the model itself, with its own alien intelligence able somehow to take our instructions and extract just the right information (and explain its reasoning) out of millions of words of text. I used to describe my early collaborations with semantic software as being like a duet between human and machine. But these kinds of intellectual adventures feel like a chorus.
  • This idea of the model not as a replacement for human intelligence, but instead as a tool for synthesizing or connecting human intelligence, seems to be gathering steam right now, which is good to see. The artist Holly Herndon made a persuasive case for calling artificial intelligence “collective intelligence” in a recent conversation with Ezra Klein. My friend Alison Gopnik has been talking about AI for a long while as a “cultural technology,” which adds weight to the prediction that humanities skills will have increasing relevance in a world shaped by such technologies. In a recent conversation with Melanie Mitchell in the LA Review of Books, Alison argued:
  • A very common trope is to treat LLMs as if they were intelligent agents going out in the world and doing things. That’s just a category mistake. A much better way of thinking about them is as a technology that allows humans to access information from many other humans and use that information to make decisions. We have been doing this for as long as we’ve been human. Language itself you could think of as a means that allows this. So are writing and the internet. These are all ways that we get information from other people. Similarly, LLMs give us a very effective way of accessing information from other humans. Rather than go out, explore the world, and draw conclusions, as humans do, LLMs statistically summarize the information humans put onto the web.
  • This isn’t a debunking along the lines of “AI doesn’t really matter.” In many ways, having a new cultural technology like print has had a much greater impact than having a new agent, like a new person, in the world.
  • Another way to put that—which I will probably adapt into a longer piece one of these days—is that language models are not intelligent in the ways that even small children are intelligent, but they are already superhuman at tasks like summarization, translation (both linguistic and conceptual), and association. And when you apply those skills to artfully curated source material written by equally, but differently, gifted humans, magic can happen.