In Plato’s Phaedrus, written around 370 BC, we find a clear statement of the fear that new-fangled inventions such as reading and writing will have catastrophic effects on human memory. The fear was that these innovations would lead to lazy minds. We would start to think that we knew more than we really did, thanks to these cheap but superficial new means of non-biological storage and retrieval. Those fears seem laughable now.
As human-AI collaborations become the norm, we should remind ourselves that it is our basic nature to build hybrid thinking systems – ones that fluidly incorporate non-biological resources. Recognizing this invites us to change the way we think about both the threats and promises of the coming age.
It seems like the world is full of techno-gloom. There is a fear that new technologies are making us stupid. GPS apps are shrinking our hippocampus (or in other ways eroding our unaided navigational abilities); easy online search is making us think we know (unaided) more than we do; multitasking with streaming media is driving down our native attention span and (possibly) affecting grey matter density in the anterior cingulate cortex.
Empirical studies have shown that the use of online search can lead people to judge that they know more ‘in the biological brain’ than they actually do, and can make people over-estimate how well they would perform under technologically unaided quiz conditions.
As our tools and technologies have progressed, we have been able to probe ever further and deeper into the mysteries of life and matter. We have come to understand much about the likely conditions at the very start of time and unlocked the biochemical foundations of life. We have not achieved this by becoming dumber and dumber brains but by becoming smarter and smarter hybrid thinking systems.
When easy search, location services, constant access, and generative AI-based toolkits come together, they weave a complex web whose implications for humanity are far from clear. On the one hand, these webs empower us to do more and more, and more easily, than perhaps ever before. On the other, many fear they may rob us of more elusive goods. We might become content-curators rather than creators, and the passive targets of click-rate maximizing algorithms that do not have our best interests at heart.
Don’t these developments threaten to rob us not just of our time and attention but of the very joy of creation too?
Reaching just beyond the body, we exploit the rich world of familiar old-school static media. Even today, many of us will resort to pen and paper, scribbling madly as we try to think through a problem in math, life, or philosophy. More prosaically, we may sketch and re-sketch the design of a new kitchen on a paper napkin. These familiar loops through external media slowly become part and parcel of how many of us think.
But in thinking about the effects of all our new tools and technologies, we may often be starting from entirely the wrong place. The misguided starting point is an image of ourselves as (cognitively speaking) nothing but our own biological brains. An alternative vision, one that I have long championed, is that we humans are, and always have been, what New York University philosopher David Chalmers and I call ‘extended minds’ – hybrid thinking systems defined (and constantly re-defined) across a rich mosaic of resources, only some of which are housed in the biological brain [7].
That study alerts us to the potential role of AI in cementing certain tools, views, and methodologies in place, thus impeding the emergence of alternative approaches – much as, to borrow the authors’ metaphor, an agricultural monoculture improves efficiency while making the crop more vulnerable to pests and diseases.
This is because human brains are amazingly adept at dovetailing their own native skills to the new opportunities made available by wave upon wave of tools and technologies. This results not in simple ‘offloadings’ of work so much as the creation of delicately interwoven new wholes – brain, body, world tapestries in which what the brain does, what the body does, and what the loops via external media and apps provide are all in continuous flux, each adapting (in its own way, and at its own timescales) to what the rest has to offer.
The impact of AIs on the cultural variation and transmission of ideas is now being studied as a topic in its own right, already revealing a mixed pattern of good and damaging effects [20]. Such studies should help suggest targeted means of mitigating the bad and curating the good, by building infrastructure and legislation designed to offset our weaknesses and (as the authors neatly put it) keep human needs and human society firmly in the loop.
According to this emerging picture, we learn about our worlds by constantly trying to predict the sensory consequences of our own actions [14,15]. As we do so, brains like ours become expert at resolving key uncertainties by taking different actions in the world. Such actions might include using a stick to probe the depth of a river – but they may also include firing up an online resource to help resolve some other kind of uncertainty [15,16]. When the world around us presents enriched suites of opportunities, brains like that learn to do the most efficient thing – for example, storing in bio-memory only the stuff (search cues, for example) that is needed to get the right results, as and when needed, from the larger ecologies in which they are situated.
If the best uncertainty-minimizing suite of actions involves a bit of internal brainwork and a bit of bodily work (such as tapping some keys on a laptop), that is the sequence that gets chosen. The brain itself is unconcerned about where and how things get done. What it specializes in is learning how to use embodied action to make the most of our (now mostly human-built) worlds [17].
Still, there remains an intuitive difference between ‘extending my own mind’ through the use of pen, paper, and carefully-deployed apps, and (for example) simply asking someone else to solve the problem. In this regard, some of the most striking recent innovations – such as the use of ChatGPT and other generative AIs – might seem like rather bad candidates for extending human thought and reason. Instead of acting as mind-extending technologies, the fear is that these may act as mind-replacing technologies.
As we get more used to working and acting within these kinds of (increasingly personalized) ‘cognitive ecosystems’, we will start to treat such suggested opinions in rather the way we might treat a thought that suddenly occurs to us during a conversation on some new topic. We treat the new thought as in some broad sense belonging to me [8]. But – just as with that sudden thought – we would also want to explore whether it really makes sense, and whether or not we are, all things considered, happy to endorse it.
Learning how to both trust and question our best AI-based resources in this way is one of the most important skills that our evolving educational systems will now need to instill. New tools that help in assessing the outputs of standard LLMs will also play a role in that process.
A study of human Go players revealed increasing novelty in human-generated moves following the emergence of ‘superhuman AI Go strategies’ [18]. Importantly, that novelty did not consist merely in repeating the innovative moves discovered by the AIs. Instead, it seems as if the AI moves helped human players see beyond centuries of received wisdom so as to begin to explore hitherto neglected (indeed, invisible) corners of Go-playing space. The same will be true, I conjecture, in domains ranging from art and music to architecture and medical science. Instead of replacing human thought, the AIs will become part of the process of culturally evolving cognition.
Much of human intelligence involves what are often seen as metacognitive skills – skills of knowing what to rely upon and when. These skills are crucial in cases where we have to decide to what extent to rely on various forms of ‘cognitive offloading’ rather than brain-based means of storage and recall [24].
Such skill sets now need to be expanded to include, for example, the assessment of suggestions made by off-brain resources such as ChatGPT. We now need to become experts at estimating the likely reliability of a response given both the subject matter and our level of skill at orchestrating a series of prompts. We must also learn to adjust our levels of trust according to our choice of portals and wraparounds.
Nested in such a web, human creativity need not be dampened. Instead, it can flourish, continuously targeting new tasks and horizons. All that alters (just as it has always altered) will be some of the specific tasks and processing that human brains perform. These alterations will reflect what can safely be devolved entirely to the new digital unconscious, what remains apt for storage in biological memory, and what must emerge through new kinds of epistemically well-regulated interaction.
Let’s agree that using GPS systems from an early age downgrades some of our brain-bound skills at unaided wayfaring, and that believing that something you type will be stored on-device makes you more likely to forget the information than if you are told that after you type it the record will be erased. Let’s agree that brain-based recall of a good search string (enabling easy future search) is now often prioritized over on-board storage of the target information itself [22]. Such consequences are intrinsically bad (representing real losses) only if you start off by identifying your mind and self with the capacities and activities of your bare biological brain.
We need to prioritize (and perhaps legislate for) technologies that enable safe synergistic collaborations with our new suites of intelligent and semi-intelligent resources. As individuals, we need to become better estimators of what to trust and when. That means educating ourselves in new ways, learning how to get the best from our generative AIs, and fostering the core meta-skills (aided and abetted by the use of new personalized tools) that help sort the digital wheat from the chaff.
But what if you were already best understood as an extended mind, a bio-technologically distributed self? From that perspective, what such results display need not be shrinkage and loss so much as the careful husbanding of our own on-board cognitive capital [23]. In that case, the real worries are more practical ones. Perhaps the online storage or GPS signal is fragile or corrupt? Perhaps the information you retrieve is likely to be false or misleading? Perhaps bio-storage would actually, for some specific instance, be cheaper, better, or more reliable than the alternatives?
As part of this process, deep and abiding concerns for ‘extended cognitive hygiene’ will need to be instilled in us from quite an early age. Some of this is happening already. Younger generations are savvier than ever about privacy, phishing, and what to share online. We should now apply equally demanding standards to everything that we might be tempted uncritically to incorporate into our new digitally extended minds. That means developing and applying a rich epistemology (theory of knowledge) – one better suited to the unique sets of opportunities and challenges that confront our bio-technological hybrid minds [26].