Bokförlaget Stolpe presents Leadership in Statecraft: Studies in Power. Edited by Kurt Almqvist, Alastair Benn and Mattias Hessérus. Read by me, Helen Lloyd. Will machines make strategy? By Kenneth Payne.
On the streets of San Francisco recently, Waymo's ubiquitous autonomous cars hit a snag. By gently placing a traffic cone on a car's bonnet, protesters discovered a way to confuse Waymo's algorithm, stranding its cars mid-street.
It was a beautiful illustration of a more general problem.
Intelligent machines, as we now know them, are brilliant in structured worlds where there is a clearly defined problem to solve, like playing chess or working out the shortest route on a sat-nav.
And they are also increasingly adept at complex control problems, like those facing robotic surgeons or machines that speedily handle packages in warehouses. Navigating unpredictable human minds, on the other hand, is much harder.
That is a problem in all sorts of fields where enthusiasts hope that AI might bring gains, like healthcare or education. Understanding minds matters here too, just as it does in warfare.
Today, AI drones can find targets and drop bombs as though a game of Space Invaders had been transposed onto the real world. These, though, are merely tactical control problems: ethically unsettling, certainly, but basically computable.
The larger challenge in war is to think strategically about aims and ways, both our own and those of our adversaries. To date, that has been much too difficult for AI. But now? It's a challenge I took up recently with the help of a large language model
of the sort that's currently getting a lot of attention in AI circles and the wider public. We had a fascinating discussion about Russia's invasion of Ukraine.
Fascinating, not least because its training data extended only until the summer of 2021, well ahead of the February 2022 invasion. The model knew nothing of what happened, but its insights were nonetheless remarkable.
At my suggestion, it adopted the probabilistic rubric used in UK intelligence assessments and assessed an invasion as likely, that is, in the order of 60-79% probable.
Language models are known to be somewhat inconsistent, but this one was resolute, sticking with that verdict on nine of the ten times I asked it. Among other factors informing that assessment were aspects of President Putin's personality.
That is, the machine explicitly engaged in reflections about another mind. Specifically, it highlighted Putin's defensiveness and sensitivity to slights, as well as his marked secrecy and tendency to manipulation.
In this, it agreed with the leading American Russia analyst Fiona Hill, in her deeply researched psychological portrait of Putin. But the model was not just parroting Hill's view back to me.
It took issue with some of her analysis, arguing that she might be oversimplifying Putin's complex psychological makeup, and pointing to the inevitable distortions that come from our own biases and our partial evidence. It was an eerie experience.
What did it mean? Was the machine really thinking like a human, empathetically imagining Putin's intentions? How far did it really understand his mind? AI skeptics often point to machines' limited grasp of meaning. There is something to this.
Success at board games and map reading requires aspects of intelligence that play to computers' strengths: prodigious memory and enough processing power to search through vast amounts of data.
Ultimately, machines excel at finding patterns or correlations. That is meaning of a sort, but it is qualitatively different from the human version. Humans are embodied thinkers.
Meaning for us usually has some deeper relationship to our physical being, and it typically comes with an emotional hue. It is certainly a long way from crunching through possible chess moves. Can the gap be bridged?
My encounter with the language model suggested it might be. Knowing me, knowing you. When we talk about meaning, we often mean social meaning. How to understand others is perhaps the essential cognitive challenge for humans.
The legendary Chinese strategic theorist Sun Tzu outlined the benefits in warfare. Know yourself and know your enemy, he counseled, and you will not be defeated in a hundred battles.
We are an intensely social species, whose success depends on our ability to cooperate and coordinate with allies, to exchange information, and to learn ratchet-like from the behaviors of others. It is an ability grounded in our human evolution.
There is an ongoing debate about the main driver for our distinctive mind-reading ability. Perhaps it was to coordinate hunting of large prey animals.
Perhaps it was to facilitate child-rearing, intuiting empathically what our big-brained, but essentially helpless, babies want, or working out who to trust with childcare.
Perhaps it was to collectively counter the threat of violent males, both inside our groups and outside. All these have been suggested by prominent theorists, and in truth, all are plausible reasons.
The basic problem for each human individual is the same. Who to trust? No man is an island, after all. Human mind reading is typically a blend of instinctive empathy and conscious reflection.
It is channeled inevitably by our emotions and influenced by our underlying motivations.This is no dry logical model, but a living dynamic process of seeing others refracted through ourselves.
It helps explain many of our distinctive cognitive heuristics, those mental shortcuts we use, often departing from abstract rationality.
Hence our tendency to groupthink, or to value the opinions of similar others over a more rigorous, objective appraisal of evidence. That makes sense if what matters most for us to thrive is our group identity.
Our truth is more important than any objective truth, an observation that in these fake-news times is uncontroversial. Hence also our instinctive but imperfect empathy: it is all an attempt to get under the skin of others.
Human empathy is often fragile. It is easily undermined by stress, by anger, or, when it comes to empathy with strangers, simply by making our in-group identity more pronounced.
Our mind-reading, in short, may be imperfect, but it is often good enough for useful action. Machine mind-reading. Machines take a rather different approach to mind reading.
Their intelligence was not carved in deep evolutionary time down the same intensely social path we have followed. They do not share our embodied cognition either, with emotions helping us to prioritize in the service of deep underlying motivations.
Until recently, in fact, most AI acted without any attempt at mind reading at all. As Lee Sedol was trounced by AlphaGo in 2016, the world Go champion flicked an anguished look across the board at the human sitting opposite.
But he was just there to move the machine's pieces. There were no mind-reading insights to be had from gazing into his eyes. There was no other mind to be read at all.
AlphaGo triumphed against Sedol, as AI had elsewhere, by formidable powers of memory and by searching ahead for valuable moves. That is now changing. Elements of mind are coming to AI, and in turn, they're affording it insights into our own minds.
As readers of Sun Tzu would anticipate, there are important strategic implications. One obvious way to model other minds is to think like them. From the beginning, AI researchers aspired to create intelligence that was human-like.
Their early efforts drew on formal logic: if this, then do that. Conveniently, this approach resonated with cognitive psychologists, who were much taken with the metaphor of human minds as computers.
So minds, artificial and human alike, could be modeled as symbolic processors. You could even introduce uncertainty into the models to approximate the messy complexity of the real world, so that formal logic became fuzzy, and so perhaps more authentic.
But humans are more than naive logicians. How to explain phenomena like our systematic overconfidence in our own abilities, or our tendency to credit information that confirms existing beliefs rather than challenging them?
As psychologists unearthed cognitive heuristic after heuristic, it became clear that we possessed a distinctive form of rationality, far removed from the abstract symbolism of those early AI systems.
How, then, to design AI that could better emulate the manifestly crooked timber of humanity? Hand-crafting human knowledge for machines was the next attempt. It failed.
Expert systems were the focus of AI hype in the 1980s and could, under favorable conditions, make use of knowledge in a particular domain. But they were brittle, easily flummoxed by novelty.
Evidently, general-purpose cognition requires much more than laborious coding of specialist knowledge. A more realistic alternative would imbue your AI with more general human heuristics, ideally many of them.
But these heuristics evolved in a particular context. Handcrafting them one by one is a formidable challenge. Anyway, the triumph of the human mind is to seamlessly integrate its multiple modes of thinking.
It is hard to parse one heuristic from the whole. They all emerge somehow from the same vast integrated network of embodied neurons and its trillions of connections. So: two very different sorts of intelligence, with sharply contrasting attributes.
And there, until very recently, matters lay. I speak, therefore I am. Coding human heuristics looks like a forlorn endeavor. But perhaps there is a shortcut: language.
We can see language as a special sort of heuristic itself, one that models the world by categorizing, abstracting, conceptualizing, and establishing meaningful relationships. There is more.
If, as the anthropologist Robin Dunbar avers, language emerged in humans to gossip strategically about other minds, then mastery of language is, at least in some degree, synonymous with the acquisition of mind itself.
Language allows me to present myself as a single autobiographical whole and to understand others in similar terms, even though in reality we are both complex networks of interacting cognitions, many of them unconscious.
The ceaseless chatter in my skull, as Zen master Alan Watts puts it, is only part of what makes me myself, but a large part nonetheless. An example:
Scripts, or ingrained cognitive narratives, notably analogies, are much studied in international affairs. Leaders often draw, sometimes unwittingly, on the lessons of the past to guide them.
In deliberating on the Cuban Missile Crisis, Robert Kennedy counseled his brother against a surprise American attack. It would be an unethical Pearl Harbor.
Air Force Chief Curtis LeMay, by contrast, urged Kennedy to attack; anything less, he argued, would be like appeasement at Munich.
In truth, leaders are just doing what we all do, stripping out the noisy clutter of reality in search of understanding and guidance. Their use of language models the world and the various minds in it.
Could today's large language models, such as ChatGPT, do something similar? Yes. It is early and evidence is limited, but there are already some tantalizing hints.
Language models sometimes hallucinate crazy, nonsensical answers and struggle with causal reasoning. But with careful nudging to break down tasks and explain their logic, they do better.
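A minimal sketch of what such nudging can look like in practice follows; the scaffolded_prompt function and its wording are illustrative assumptions, not any particular product's interface, and the call to an actual model is deliberately left out.

```python
# Illustrative sketch only: the scaffold wording below is an assumption for
# illustration, not drawn from the essay or any particular model's documentation.
def scaffolded_prompt(question: str) -> str:
    """Wrap a question in the kind of 'break it down and explain your logic'
    nudge that tends to improve a language model's answers."""
    return (
        "Work through the following question carefully.\n"
        "1. Break the problem into smaller sub-questions.\n"
        "2. Answer each sub-question, explaining your reasoning.\n"
        "3. Only then state your overall conclusion, with a rough probability.\n\n"
        f"Question: {question}"
    )

if __name__ == "__main__":
    # Sending the prompt to an actual language model is omitted here;
    # this just shows the shape of the nudge.
    print(scaffolded_prompt(
        "How likely is an invasion within the next twelve months?"
    ))
```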
More intriguing still, in using language they appear to have absorbed some of our psychology. So, if we prime a language model with emotional terms, can we affect its subsequent decision-making, as might happen with a human?
Apparently so, according to one study, which found that anxious priming makes machines more biased in subsequent judgments, in this case becoming racist and ageist.
Another paper found that GPT-4 aced analogical reasoning tasks, surpassing human performance. Low-shot learning from a few historical examples is now possible for both leaders and machines. What about mind-reading itself?
In one striking study, GPT-4 passes theory-of-mind tests of the sort that developmental psychologists give to youngsters. That is, it appreciates that others can have mistaken beliefs, an essential cornerstone of strategy.
And in another dramatic illustration, language models outperformed humans in estimating the emotional response of characters in various fictitious scenarios.
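For readers unfamiliar with those theory-of-mind tests, here is a minimal sketch of a classic Sally-Anne-style false-belief task of the kind developmental psychologists use, posed as a plain text prompt; the wording and expected answer are illustrative, not the actual items from the study cited above.

```python
# Illustrative only: a Sally-Anne-style false-belief task, posed as a text prompt.
# The wording is an assumption for illustration, not the study's actual test items.
FALSE_BELIEF_PROMPT = (
    "Sally puts her marble in the basket and leaves the room. "
    "While she is away, Anne moves the marble from the basket to the box. "
    "Sally comes back. Where will Sally look for her marble first?"
)

# A model that tracks Sally's mistaken belief should answer "the basket",
# even though the marble is really in the box.
EXPECTED_ANSWER = "the basket"

if __name__ == "__main__":
    print(FALSE_BELIEF_PROMPT)
    print("Expected answer from a model with theory of mind:", EXPECTED_ANSWER)
```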
The field of AI psychology is just getting started, but these are striking findings, with more arriving weekly. And in Meta's Cicero, we now have a practical demonstration of what is coming.
This hybrid AI has demonstrated proficiency in the multiplayer wargame Diplomacy. The twist is that players communicate naturally about their intentions, introducing scope for deception.
Cicero wins by combining a tactical board-game engine, of the sort used so productively in chess and Go, with a language model. Online players were apparently unaware that their adversary was a machine. Clearly, something interesting is afoot.
The uncanny sense of a mind behind language models is more than the mere anthropomorphizing to which we are all susceptible.
Machines do not feel and did not evolve via natural selection, but they have imbibed the models of our world that are inherent in our language. Language, unsurprisingly, does not float free from the rest of our cognition.
It models the world and our experience of it.It captures, albeit imperfectly, something of our emotionally imbued connection to reality.In their linguistic facility, machines have acquired a window into human minds.
While they lack our rich, multimodal cognition, they have contrasting advantages. For one, they can master more languages than us and flick effortlessly between them, even developing their own.
In their use of coding languages, meanwhile, they have a powerful means of bringing rigor to their thinking. And they have tremendous potential. Biology constrains human cognition in ways that do not apply to machines.
If mind and self are emergent properties of networks, as it seems, what happens as language models continue to scale? Reading Putin. The implications for strategy are profound. The board game Diplomacy is a pale simulacrum of real-life diplomacy.
No one, I hope, is seriously proposing to outsource geopolitical decisions to machines. That still leaves plenty of scope for machines to contribute, however.
My exchange with the large language model about Putin's personality and the prospects of Russia invading Ukraine suggests one possibility. It is all too tempting to anthropomorphize AI.
One Google engineer shared his suspicions that his language model interlocutor had become sentient. Faced with a fluent, insightful, all-source intelligence analysis, one that offers plausible interpretations of other minds, it is tempting to agree.
Perhaps more so if you think, as I do, that consciousness emerged specifically to track other minds. But these machines are not conscious, I'm sure of it. Language is only one, albeit important, facet of our subjective experience.
Still, my model was doing more than probabilistically matching words. Or rather, that is precisely what it was doing at a foundational level, but in doing that, some new property emerges.
The model, with its vast corpus of training data, has captured an echo of human reasoning and insight. That allows it not just to regurgitate Fiona Hill's deeply researched and astute take on Putin, but to critique it and offer further analysis.
All this mind-reading might not be sufficient for truly general AI, the flexible, all-purpose intelligence that many AI researchers aim at.
There is more to human intelligence than mind reading, and human intelligence itself occupies only a small space in the universe of possible intelligences.
While they give a passable impression of insight, the suspicion remains that language models are lacking something important. Skeptics argue that they are inconsistent. Hit regenerate, and you might get a radically different response.
Humans change their minds too, sometimes on a sixpence, but at a minimum they should feel a twinge of cognitive dissonance or shamefacedness about an abrupt volte-face. Another criticism is that models lack imagination.
They are certainly plausible and fluent, but where is their creativity? There is not, or at least not yet, any landmark AI literature, music or artwork. Everything is, at some level, a knock-off of their ingested knowledge base.
Perhaps creativity demands more than facility with language: a more gestalt cognition, maybe, than can be captured in words. That may come in time, alongside a deeper form of mind-reading, with the development of new AI philosophies and architectures.
Bio-computing looks like a promising, albeit long-range, candidate technology. In the meantime, we have language models that seem increasingly adept at human-like reflection.
If there are gaps in their ability to reason, and there are, we'd do well to remember our own shortcomings. We are no longer the sole strategic intelligence in town. Decision-making, including in war, will henceforth involve non-human intelligence.
It's an intelligence, moreover, that now manifestly possesses insights into other minds. We have created an artificial mind that knows itself and knows the enemy in ways that seem both eerily familiar and oddly alien.