Here comes the science-fiction part
The first of many posts about intelligent machines in science-fiction books and movies.
What sci-fi can tell us about the near future is such a vast area and I can’t wait to dive in. For now, I just want to be a magpie and pick out a couple of shiny observations. The first is about monkeys and crows; the second is about religion.
The analogy that most readily comes to mind when considering machines and predictive text is the infinite monkey theorem (popularised if not conceived, according to some not very scientific digging, by Émile Borel and Arthur Eddington - but ultimately owing most of its popularity, I suspect, to Douglas Adams and then Brian Cox - not that one - et al). To recap: give enough monkeys enough time and typewriters and the works of Shakespeare must result at some point. It’s really a sort of joke about probability and infinity, but predictive generators throw the analogy into sharp relief. Remix the entire internet etc. enough times and you’ll get plausible answers to your questions (now) and something new and extraordinary (soon).
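To put a very rough number on the joke - a back-of-the-envelope sketch, with an invented 27-key typewriter, so every figure here is illustrative rather than canonical:

```python
# Back-of-the-envelope: how long before one monkey, hitting 27 keys
# (a-z plus space) uniformly at random, types a short phrase?
phrase = "to be or not to be"          # 18 characters
keys = 27

p = (1 / keys) ** len(phrase)          # probability of one perfect attempt
expected_attempts = 1 / p              # expected attempts before success

print(f"{expected_attempts:.1e}")      # ~5.8e+25 attempts - for one line
```

Hence the ‘infinite’: the numbers run away from you after a single line of Hamlet, which is rather the point of the joke.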
Happily, one of sci-fi author Adrian Tchaikovsky’s novels includes a far more potent metaphor for machine learning. Tchaikovsky imagines the supercharged evolution of a species of corvid - ultimately resulting in a kind of general intelligence (artificial - or actual intelligence? There’s the rub). He divides the business of processing into two distinct roles: memory and pattern recognition for one half of the collective brain; prediction and theorising for the other. When these combine - in a pair of birds like his characters Gothi and Gethli - you have interactions with a strong echo of ‘talking to’ an AI.
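If it helps to make that split concrete, here’s a deliberately crude sketch of the two roles - one half stores and matches patterns, the other guesses what comes next from those matches. The class names and the toy corpus are mine, not Tchaikovsky’s, and no real language model is remotely this simple:

```python
from collections import defaultdict
import random

class Memory:
    """One half of the pair: stores sequences and recognises patterns."""
    def __init__(self):
        self.follows = defaultdict(list)   # word -> words seen after it

    def observe(self, text: str):
        words = text.lower().split()
        for a, b in zip(words, words[1:]):
            self.follows[a].append(b)

class Predictor:
    """The other half: theorises about what comes next, using the memory."""
    def __init__(self, memory: Memory):
        self.memory = memory

    def next_word(self, word: str) -> str:
        candidates = self.memory.follows.get(word.lower())
        return random.choice(candidates) if candidates else "..."

# The pair only 'converses' when both halves are combined.
memory = Memory()
memory.observe("the bird watched the sky and the bird remembered the storm")
predictor = Predictor(memory)
print(predictor.next_word("the"))   # 'bird', 'sky' or 'storm' - pattern, not thought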
The mimicry of intelligence is uncanny; the question of true sentience fraught. But it’s useful, whenever we’re tempted to anthropomorphise, to think of a pair of chattering crows as often as we picture a human - or a roomful of monkeys. Helpful too to acknowledge all the human experience, and labour, that shaped those crows - whatever we make of its/their status as creatures.
And to round off, a second quick observation about sentience and limitation. We can get there by way of Tchaikovsky or China Miéville, since both authors feature human disciples of powerful AIs. So much doomsday speculation seems to bypass the role of people altogether, pitching all of humanity in a struggle against some rogue - and invariably sentient - machine. What seems rather more likely is that dependence on, and reverence for, the machines we create will recruit adherents and extremists - with no need for the machines to become sentient as an interim step, or for us to lose the ultimate ‘physical’ ability to control them.
Four starters.
Manifesto time. Let’s go easy on the ethics and sentience conversations, and take a closer look at the edges of what’s possible - physically, culturally and behaviourally - with AI.
This is such a vast field that I want to take some time to thrash out the big themes I’ll focus on first.
But let’s begin by ruling out some of the ideas and topics that can dominate the space. First up, ethics. Huge issue; but not one I want to tackle here. (Frankly, since we can’t as a species seem to get to grips with impending extinction, and we already have billionaires deciding not just which voices to amplify but which nation states get to use their satellites in a war, the notion that we’ll somehow avoid all the pitfalls feels faintly ludicrous. AI isn’t going anywhere, and will accelerate everything good and bad about Homo sapiens.) I’m chucking the whole IP/ownership-authorship debate in this bucket too, covering that bucket with a cloth and leaving it in the corner.
Second, sentience. Seriously, who cares? Not just because the line we’ll cross is so remote or so fine. And not just because none of the doomsday scenarios require sentient ‘machines’ to become a reality. The reality is that sentience is a matter of perception and preference. Do you want your copilot / Copilot / new life partner to seem self-aware; or do you want to banish any hint of personality? Pay your money, take your choice, knock yourself out. Some court somewhere will one day decide whether your entity deserves recognition on some scale between humans, forests, and lobsters; but that verdict won’t change much about anyone’s lived experience. And in the meantime, a lot of money will have been spent on anticipating and fine-tuning the way your interactions pan out, so that you feel however you want to about the thing / friend / pet / God / clone-twin in question. (Since forever, people have had beliefs. Since the 90s, people have bawled their eyes out when a Tamagotchi dies. Relationships, religion and perception matter; philosophy and reality, not so much.)
And finally, with apologies, a refusal to engage with tight definitions and distinctions. I’ll use ‘AI’ in the broadest sense to cover a multitude of sins and underlying, often very disparate technologies and tools. Bandwagons roll!
Ok. So what’s interesting right now?
I’ll kick off with four questions that have preoccupied my poor addled mind so far.
What can’t (yet) be ‘tokenised’ or digitised - and what are the implications?
Without wanting to get sidetracked by some predictable paean to the ineffable aspects of human interaction or creativity: what are the aspects of experience that can’t be fed into a generative model? What might these blindspots mean when it comes to the tools we build? What advantages (e.g. decision-making minus personal politics) and disadvantages (e.g. my LLM will never successfully write an opera) result?
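As a quick aside on what ‘tokenised’ actually means in practice, here’s a toy illustration - the vocabulary is invented for the example, and real tokenisers learn sub-word units from data rather than using a hand-written word list:

```python
# Toy tokeniser: text becomes integer IDs a model can process.
# Anything outside the vocabulary collapses to a single 'unknown' token -
# a crude picture of what gets lost at the edges of the digitisable.
vocab = {"the": 0, "cat": 1, "sat": 2, "on": 3, "mat": 4, "<unk>": 5}

def tokenise(text: str) -> list[int]:
    return [vocab.get(word, vocab["<unk>"]) for word in text.lower().split()]

print(tokenise("the cat sat on the mat"))      # [0, 1, 2, 3, 0, 4]
print(tokenise("the cat pondered eternity"))   # [0, 1, 5, 5] - detail lost
```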
How are individuals and societies likely to change?
The small matter of macro and micro social prediction. But can we get properly grounded and confident in such predictions? (As an aside, science fiction is always supposed to be about the present. But we’re now in a moment where speculation about powerful machines can offer a bit of a roadmap for our near future.)
What are the physical boundaries (and potential shocks) for this technology?
Given that we’re likely to become more enthralled by and reliant on the power of AI/machine learning, what risks are we storing up when it comes to things like chips, electrical power and precious metals?
What are the behavioural, social and practical boundaries?
How can the new tools and tech be hacked and subverted? What use might criminals, the mad, the inspired and the perverted find for them? Every leap forward will have a dark side; and every misappropriation could unlock a social advance or positive use case.
Will social forces - in some nations at least - successfully resist new technology to keep more people working and learning in traditional ways? Or will each society preserve a version of this model for people to holiday in?
That, for now, is it. I wrote this in the old-school manner: with no co-pilot or filter. Not a sentence I’d have expected to write a year ago. And perhaps an indicator of the way use cases for generative tech might develop: sometimes we want to draw, handwrite, or fly solo; sometimes we don’t. But the end result is what counts.
Begin with behaviour
The three key strands of this blog: behavioural insight informing ML tools; new tech shifting behaviour; and the disinformation crisis.
There are three things going on here. On the one hand, there’s the ongoing process of getting every area of business to catch up with the insights of behavioural science - the development of AI tools being one of those areas. Then (we’re going to end up with three hands, but never mind) there’s the new feedback loop in which technology begins to shift our patterns of behaviour - all layered on top of more basic evolution. Finally, the rather more urgent social crisis of disinformation and related behaviour.