As a prosaic personal example, I’m happy to offload navigational skills to my phone, but I hate it when my phone starts auto-suggesting answers to people’s messages. I don’t really want to offload my social cognition to a computer – I’d rather engage in real communication from my mind to another person’s.
Precisely. The question is, which tasks are so dangerous, dull, demeaning or repetitive that we're delighted to outsource them, and which do we feel it's important to do ourselves, or to have done by other humans? If I were going to be judged in a trial, I wouldn't necessarily want an algorithm to pass a verdict on me, even if the algorithm is demonstrably very fair, because there's something about the human solidarity of people in society standing in judgement of other people. At work, I might prefer to have a relationship with human colleagues – to talk to and explain myself to other people – rather than just getting the work done more efficiently.
Technology may have evolved with us, but it's not alive. Yet many of the latest technologies, especially artificial intelligences, can appear to act as if they have minds, tricking us into perceiving some kind of sentience. You describe this as the "anthropomorphic delusion". What is it? And why is it dangerous?
There's a double danger to anthropomorphism. The first is that we treat machines like people, projecting personalities, intentions and thoughts onto artificial intelligences. Although these systems are extraordinarily sophisticated, they don't possess anything like human understanding, and it's very dangerous to act as though they do. For a start, they don't have a consistent worldview: they are miraculously brilliant forms of autocomplete, working on pattern recognition and prediction. This is very powerful, but they tend to hallucinate, making up details that don't exist, and they will often reproduce various forms of bias or exclusion present in a particular training set. Yet an AI can respond fast and plausibly to anything, and as human beings we are very predisposed to equate speed and plausibility with truth. And that's a very dangerous thing.
Similarly, we might overlook the very large corporations that lie behind these entities, which have their own agendas, their own modes of profit, their own issues around privacy, and so on. So anthropomorphism gets in the way of something really important: the well-informed, critically engaged process of debating what these systems are, what they can do for us, what their risks are, and how we should deploy and regulate them.
The other danger of anthropomorphising technology is that it can lead us to think of and treat ourselves as though we're machines. But we are nothing like large language models: we are emotional creatures with minds and bodies, deeply influenced by our physical environment and by our bodily health and well-being. Perhaps most importantly, we shouldn't see [a machine's] efficiency as a model for human thriving. We don't want to treat ourselves as sets of perfectible components to be optimised within some vast consequentialist system. The idea that humans can have dignity, autonomy and potential is very ill-served by the desire to optimise, maximise and perfect ourselves.
Tom Chatfield's book Wise Animals: How Technology Made Us Who We Are is published by Picador.
*David Robson is an award-winning science writer. His next book is The Laws of Connection: 13 Social Strategies That Will Transform Your Life, to be published by Canongate (UK) and Pegasus Books (USA & Canada) in June 2024. He is @d_a_robson on X and @davidarobson on Instagram and Threads.
