Ascribing & Ignoring: how not to think about AI
Unhelpful ways of thinking about AI fall into one of two categories: ascribing ‘human’ traits that aren’t there; and wilfully or unconsciously ignoring the various humans, commercial interests, and other stuff behind the curtain.
Let’s call the first trap ‘Anthropomagical thinking’.
This one’s for everyone ‘brainstorming’ with a ‘co-pilot’ that tells us what it’s ‘thinking’. Has it offered up an ‘idea’, or just a plausible string of text (however good that text sounds in Scarlett Johansson’s stolen voice)? We have to hold onto a sense of the (astonishing, empowering) mechanisms at work here: probabilistic computation at extraordinary speed and scale.
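To make that mechanism concrete, here’s a deliberately toy sketch in Python. The vocabulary and probabilities are entirely invented; a real model computes a distribution like this over tens of thousands of tokens, billions of times over. But the core move is the same: not ‘having an idea’, just sampling a statistically plausible next token.

```python
import random

# A toy stand-in for a language model's output: a probability
# distribution over possible next tokens, given the text so far.
# (These numbers are made up purely for illustration.)
next_token_probs = {
    "suggestion": 0.70,
    "thought": 0.15,
    "plan": 0.10,
    "idea": 0.05,
}

prompt = "Here is my best"

# The 'co-pilot' move: pick a next token at random, weighted by
# probability. No intent, no insight; just weighted dice.
token = random.choices(
    population=list(next_token_probs),
    weights=list(next_token_probs.values()),
)[0]

print(prompt, token)  # most often: "Here is my best suggestion"
```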
But the anthropomagical also takes in anyone who believes their tool has absorbed ‘all human knowledge’. However vast, training data remains just that: tokenisable information; partial, biased, and shorn of human context.
Perversely, the second big trap is to ignore the actual humanity involved. Let’s call it our ‘mental green curtain’.
It’s all too easy to think and talk about AI as technology in the abstract. In reality, we’re as often as not talking about freemium and subscription services provided by a handful of Western tech companies.
Abstract tech feels at once democratic, a birthright and, like nuclear power, one of those things we can’t control or legislate around. But, to paraphrase Shannon Vallor, AI is ‘people all the way down’. It’s real humans designing, testing, maintaining and profiting from those services; and it’s real humans moderating the content used to improve them, or generated by them. Allowing technology to stay unregulated is not some physical inevitability, but a choice real humans make.
We need to remember that AI, like the internet, is ultimately a physical reality somewhere. One that’s guzzling energy and water, as it happens. With real humans (here’s looking at you, Nvidia!) doing very nicely out of selling shovels in a gold rush (other precious metals are, for now, available).