Long Read
AI Futures, Peace and War
Geostrategic security is in crisis at a time when urgent action on humanity’s intertwined environmental, technological and infrastructure systems is critical to long-term sustainability. The natural world, energy, food, water, raw materials, health systems and digital information networks are all in volatile transition. Scientific discovery and invention bring revolutionary progress, yet often generate unanticipated systemic risks.
Artificial intelligence (AI) and other technologies amplify challenges for the next decade and beyond, threatening to undermine security. Yet specialist applications promise to bring solutions: not just to address the root causes of conflict, but to transform governance. We make a distinction between AI systems as a force multiplier in conflict and war and AI as part of a set of methods and tools that will transform early warning and, in turn, create the system conditions and pathways to peace.
We believe that novel ways of thinking about possible futures—together with a new generation of knowledge engineering and AI-based simulation methodologies—have the potential to bridge gaps in understanding that are both systemic and strategic, spanning governance and diplomacy.
To put what follows in context, it is commonplace to frame multiple interrelated threats to resilience and peace, including AI systems, as well-defined future ‘risks’. Yet none can be seen in isolation.
Risks emerge in an ocean of uncertainties. They are best viewed as a fluid set of ill-defined system variables, with multiple interconnections and uncertain outcomes that change over time. Put simply, food, water and energy crises may lead to social unrest and vice versa. Yet system failures are rarely as linear as this suggests. This is a world characterised by latent and hidden tipping points and non-linear cascading events.
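The dynamic described here—interdependent systems with latent tipping points, where one failure loads stress onto its neighbours—can be illustrated with a minimal toy model. All system names, thresholds and coupling weights below are illustrative assumptions, not calibrated values:

```python
# Toy model of cascading failures across interdependent systems.
# All thresholds and coupling weights are illustrative assumptions.

SYSTEMS = ["food", "water", "energy", "social"]

# COUPLING[a][b]: extra stress pushed onto b when a fails (hypothetical values)
COUPLING = {
    "food":   {"social": 0.5},
    "water":  {"food": 0.4, "social": 0.3},
    "energy": {"food": 0.3, "water": 0.4},
    "social": {"energy": 0.2},
}

THRESHOLD = 1.0  # stress level at which a system tips


def cascade(stress: dict) -> set:
    """Propagate failures until no new system crosses its tipping point."""
    failed = set()
    stress = dict(stress)
    changed = True
    while changed:
        changed = False
        for s in SYSTEMS:
            if s not in failed and stress[s] >= THRESHOLD:
                failed.add(s)
                changed = True
                # a failure pushes additional stress onto coupled systems
                for target, weight in COUPLING[s].items():
                    stress[target] += weight
    return failed


# A modest shock to energy alone stays contained...
print(cascade({"food": 0.5, "water": 0.5, "energy": 0.9, "social": 0.4}))
# ...but a slightly larger one tips energy, then water, food and social in turn.
print(cascade({"food": 0.5, "water": 0.7, "energy": 1.0, "social": 0.4}))
```

The point of the sketch is the non-linearity: a small difference in the initial shock separates a contained incident from a runaway, system-wide cascade.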
In this essay, we first explore the grand challenge of navigating radically uncertain futures. We then briefly draw some lessons from history. We illustrate some critical, highly contested system-level uncertainties—the resolution of which will shape future outcomes—before outlining a scenario-based vision of some possible futures.
Grand Challenges
We make the case that AI-based predictive models and early warning on emerging climate, food, water, energy, transport, economic and demographic crises have the potential to transform the way we govern at national and international levels.
However, there is a more ambitious target: to frame governance and policy as system-level interventions set in long-term, scenario-based strategic frameworks. In our view, policy navigational models should focus on early anticipation and prevention of events, ultimately in real-time—beyond the conventional, static scenario models and linear forecasts that have constrained agenda-setting governance for decades.
Only by simulating emerging events, possible shocks, surprises and ‘unknown unknowns’ can leadership teams develop shared understanding and recognise the risks of future systemic crises long before they begin to develop momentum.
We currently have too few tools for creating a shared understanding of how these complex, constantly adaptive systems may become linked over time and how cascading events may create runaway conditions. Science and technology accelerate at machine speed, but fast machines and slow governance are not formulas for sustainable development, long-term resilience or peace.
The next generation of strategic risk management must deliver a better understanding of systems, how events crystallise and propagate, and how imagined futures shape human responses.
The grand challenge is to reinvent governance to match the demands of a potentially chaotic world dominated by the cascading impacts of poorly understood, fast-moving events that threaten fragmentation and breakdowns in trust. In other terms, the challenge is to shift from short-term reactive policy to foresight-driven, agenda-setting and pre-emptive action.
Nowhere is the challenge as acute or urgent as in the deeply interconnected worlds of AI, peace and war.
History, Understanding, Imagination
There are many theories of how peace turns to conflict and war and how war ends and brings peace. This is not the place to describe them or to suggest that crises are the same, or even similar. Each crisis is unique.
There are, however, some lessons to be learned that have particular relevance to the decades ahead, and the urgency of reinventing governance and how we think about the future.
Historians often talk about failures of understanding and failures of imagination. Peter Frankopan makes the case that “Much of human history has been about the failure to understand or adapt to changing circumstances in the physical and natural world around us.”[1]
The expression “failure of imagination” dominates the narrative in reviews of catastrophic events over the last few decades, surrounding everything from 9/11 to Hurricane Katrina and the 2007-8 financial crisis. Many security and governance crises are collective failures by multiple protagonists. They are also rooted in methodological gaps and inadequate tools for exploring risk and opportunity within a shared, forward-looking, adaptive framework. They are, in a sense, systems failures.
We can learn from Christopher Clark and his examination of leadership behaviour in the run-up to World War One, in which the protagonists were:
…sleepwalkers, watchful but unseeing, haunted by dreams, yet blind to the reality of the horror they were about to bring into the world…
They were not sleepwalking unconsciously, but all constantly scheming and calculating, plotting virtual futures and measuring them against each other… I was struck by the narrowness of their vision.[2]
The paradox is that humanity’s defining talent at the individual level is precisely this: consciousness, the latest models in neuroscience tell us, is characterised by the constant simulation of possible futures. We are knowledge-seeking. Curious. We innately project our actions forward, looking for sources of surprise and shock so that we can anticipate and adapt, in advance, to changing worlds.
In this context, imagined futures are cultural realities—mental simulations that shape decisions in the here and now. They are also contested and therefore a defining feature of uncertainty, driving everything from rivalry over ‘industries of the future’ and risks of misjudgment and misunderstanding in crises, to competing visions about the future of AI.
We have yet to scale our individual talent to meet the challenges ahead. We argue that this is one area where specialist methods, supported by AI systems, have transformative potential—enhancing rather than replacing human imagination and creativity.
Inter-Systemic Uncertainty and Risk
These perspectives have growing relevance. Never before have world leaders faced so many highly interconnected, fast-moving and accelerating threats. As interconnections in a system increase, so do complexity and uncertainty. Systems become more difficult to understand. Creating a shared social sense of reality—critical to political authority, trust and social stability—becomes increasingly difficult. Growing uncertainty feeds public feelings of disorder and threatens to undermine culturally embedded trust in traditional leadership norms and in institutional authority.
With the acceleration of AI, digital technologies, robotics and open media, the number of interconnections between people, places and things will continue to grow at exponential rates. Social media, which has transformed politics, global security and the wider media environment, is the most obvious example. For all its benefits, at worst it has become a theatre for cross-border intelligence interventions, conflict and disorder and, more importantly, a driver of uncertainty.
Against this background, we should reconsider the important and often ignored distinction between risk and uncertainty originally drawn by economist Frank Knight in 1921. Risk describes situations where we can enumerate possible future events and their probabilities. This view dominates present-day governance and policy, despite its limitations. The reality is that crises are, at root, the product of what Knight called true uncertainty, where “we may not even be able to imagine all future events.” In the modern era, this is framed as ‘radical uncertainty’.
The problem is that a century after Knight, ways of thinking about governance and policy are not keeping pace with ever-increasing complexity, uncertainty and, above all, speed of change.
Critical Uncertainties
This brings us to some of the critical system-level variables and uncertainties, each with multiple outcomes over time. These outcomes and many more will shape the landscape of AI, peace and war over the next decade. To take a simple example, action on climate change may turn out to be ‘too little, too late’. Alternatively, exponential system-level innovation in green technologies may create drastic cuts in emissions and herald the emergence of a secure, sustainable world.
The variables and uncertainties illustrated here form the basis of the scenarios that follow. They range from geostrategic competition to climate security and power. They extend to the emergence of new generations of distributed, agent-based and autonomous AI systems, breakthroughs in quantum machines, and simulation-based predictive modelling.
If complexity, radical uncertainty and speed overwhelm the strategic governance agenda, then in some scenarios AI systems themselves will widen the ‘sphere of misjudgment’ in political and security systems, in part because they are opaque. Social media, generative AI, quantum computing and new cyber offensive weapons threaten to destabilise states, creating the system conditions for societal breakdown.
Looking ahead, geostrategic disorder and chaos may become the new norm, with pervasive conflicts in cyberspace and over supply chains and natural resources. Integrated AI, quantum, neurocomputing and cyberwar technologies may become the defining source of tension and driver of conflict between major powers. In a world of multiple, AI-driven weapons, inventive non-state actors may add to the volatility.
The pivotal uncertainty is the extent to which complexity, uncertainty and speed will overwhelm global leaders and governance institutions at national and international levels. In the absence of a widely shared sense of vision and purpose, a world of unintended consequences may perpetuate tensions between political, public, humanitarian and security interests and well-financed technology companies.
Power relations between states, non-state actors and the major technology and media companies are in the midst of a volatile contest, as illustrated by the Geopolitical Lens released in the 2023 edition of the GESDA Science Breakthrough Radar. Technologies and how they are used may come to dominate over ideology. At the heart of the uncertainty is how states, international institutions and private companies find workable governance solutions. Might we see the development, for example, of distributed digital worlds, regulated within global norms but controlled at state level, with ‘sovereign’ data and AI systems managed within boundaries?
As the digitisation of core infrastructure systems—energy, food, water, transport and media—gathers pace, the unresolved threats to security and social stability may grow. Core infrastructures may become part of the volatile and widening digital ‘theatre of conflict’ that extends from media and communications networks to command and control, all enveloped in a web of semi-autonomous AI systems that threaten to create their own realities, transcend boundaries and elude security and governance systems.
In some conflict scenarios, infrastructures may be seen as legitimate targets. Hybrid conflict—from propaganda and mind-control to clandestine sabotage ‘below the threshold of war’—may gain momentum.
Another critical uncertainty is the extent to which AI will amplify the risks of misinterpretation, miscalculation and misjudgement, particularly in the world of military decision-making, peacekeeping and intelligence. At multiple levels, AIs are opaque. Transparency, explainability, provenance, authenticity and trust in current mass-market AI products and services are elusive. They may remain so.
In the wider context of open media, AI applications are—barring solutions to the challenges of large-scale real-time curation—a pervasive technology that threatens to undermine the foundations and fabric of society through the manipulation of language and amplification of false narratives. This is deeper than conventional debates about misinformation and disinformation. To quote George Orwell:
In our age there is no such thing as ‘keeping out of politics’. All issues are political issues, and politics itself is a mass of lies, evasions, folly, hatred and schizophrenia. When the general atmosphere is bad, language must suffer.[3]
This is all the more important because distinctions between machine intelligence and human intelligence are easily blurred. Machines can deliver human-like output, yet their behaviour cannot be explained in human terms, even by their designers. We are vulnerable to treating AIs as if they were human, with human ethics, emotions, motives and intentions. This is critical: we share meaning through language, image and story, and so risk ceding control to machines.
This stretches to mind and social control. There have been many warnings at the intersection of social media, generative AI, national elections and global security, but policy and regulatory responses are lagging behind the rate of innovation.
The interplay between policy, regulation and these critical uncertainties may be resolved in different ways over time. In the face of growing climate crises, governments may take draconian steps to secure access to food, water and core infrastructures, curbing the development of high-energy AI systems. The fragmented risk landscape may again be dominated by geography and national boundaries.
In parallel, specialist high-trust AI and data networks may dominate ‘mission-critical’ areas such as finance, insurance, infrastructure, aviation, shipping, health and security communications. Internationally agreed ‘guardrails’ may embody rules about AI, cross-border cyberwar and disinformation.
Resilience, Simulation and Hedging
The following scenarios draw on our library of simulation models that map multiple ‘high impact’ variables, each with potentially extreme outcomes that will develop and interact in novel and often surprising ways over time.
There are thousands of possible scenarios that may emerge over the next decade. We illustrate just three: Dark Ages, Walled Gardens and Renaissance. These are deliberately extreme possible outcomes that may develop over time and are designed to inspire dialogue about what sort of world we want.
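The scale of that scenario space can be sketched by enumerating combinations of outcome states for a handful of critical uncertainties. The axes and outcome labels below are simplified assumptions for illustration, not the authors’ actual model:

```python
from itertools import product

# Simplified scenario axes: each critical uncertainty with possible outcomes.
# The axes and outcome labels are illustrative assumptions only.
UNCERTAINTIES = {
    "climate_action":  ["too little, too late", "green transformation"],
    "geostrategy":     ["open conflict", "walled blocs", "cooperation"],
    "ai_governance":   ["unregulated", "state-controlled", "multilateral guardrails"],
    "infrastructure":  ["fragile", "resilient"],
}

# Every combination of outcomes is a candidate scenario.
scenarios = [dict(zip(UNCERTAINTIES, combo))
             for combo in product(*UNCERTAINTIES.values())]
print(len(scenarios))  # 2 * 3 * 3 * 2 = 36 scenarios from just four axes

# Named archetypes are particular corners of this space, e.g.:
renaissance = {
    "climate_action": "green transformation",
    "geostrategy": "cooperation",
    "ai_governance": "multilateral guardrails",
    "infrastructure": "resilient",
}
assert renaissance in scenarios
```

Even four coarse axes yield dozens of combinations; with more variables and outcomes over time, the space runs into the thousands, which is why named archetypes such as Dark Ages, Walled Gardens and Renaissance serve as navigational landmarks rather than predictions.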
In Renaissance, we explore a future in which the system conditions and pathways to peace might emerge. In this positive vision, AI systems have the potential—when deployed with human expertise, imagination, inventiveness and a shared sense of purpose—to transform governance, policy-making and diplomacy at national and international levels and, in turn, create prospects for peace.
To put what follows in context, any viable organisation must be able to cope with the dynamic complexity and uncertainty of its future environment. The same applies to states, cities, non-profit organisations, corporations and humanitarian agencies most concerned with creating the system conditions for peace.
The starting point is a shared understanding of possible futures. We define resilience as adaptive policies that work in even the most extreme possible scenarios. In practice, ‘adaptive’ does not mean rapid responses to events but collective action on simulations and foresight in anticipation of crises. The alternatives inevitably lead to ‘too little, too late’.
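This definition of resilience—policies that work even in the most extreme scenarios—amounts to a robustness test rather than an optimisation. A minimal sketch, with entirely hypothetical policy names, scenario scores and threshold:

```python
# Minimal robustness check: prefer policies that perform acceptably in
# every scenario over those that excel only in the expected one.
# Policy names, scores and the floor value are hypothetical placeholders.

SCENARIOS = ["dark_ages", "walled_gardens", "renaissance"]

# policy -> score per scenario (0 = collapse, 1 = thrives); assumed values
POLICIES = {
    "optimise_for_growth": {"dark_ages": 0.1, "walled_gardens": 0.5, "renaissance": 0.9},
    "hedge_and_adapt":     {"dark_ages": 0.5, "walled_gardens": 0.6, "renaissance": 0.7},
}


def resilient(policy: str, floor: float = 0.4) -> bool:
    """A policy is resilient if it clears the floor in every scenario."""
    return all(POLICIES[policy][s] >= floor for s in SCENARIOS)


for name in POLICIES:
    print(name, "resilient" if resilient(name) else "fragile")
```

The design choice mirrors the argument above: a policy tuned to the most likely future may score highest on average yet fail catastrophically in the tail scenarios, while an adaptive, hedged policy clears the floor everywhere.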
Dark Ages: towards 2035
Dark Ages is a world of climate and biosphere chaos, deepening regional conflicts over natural resources, pervasive wars in cyberspace, tensions in technology supply chains, crises in military security and radical innovation.
This is a world driven by unresolved geostrategic conflict, inequality, austerity, cultural division, escalating trade wars and economic stagnation. Overstressed financial markets are in continual crisis, pervaded by hidden complexity and denial of the looming impacts of biosphere failures on asset values and stability. The destructive power of state-backed propaganda, proxy wars, mass-scale manipulation and secret cyberwar fuels the fire, amplified by machine intelligence.
Amidst the chaos, power is fragmented among the major states, growing numbers of autocratic, isolationist regimes, corporate power and finance. Malign actors—criminal networks, state proxies, other non-state groups and fast-moving virtual terrorist cells—leverage freely available AI models and open social media networks. This is a world where inter-systemic failures are pervasive, transforming the conflict landscape. Intelligence and hybrid wars escalate.
Open, unregulated social media, waves of generative AIs, quantum computing and new cyber offensive weapons destabilise the major powers. Integrated AI, quantum, neurocomputing, ‘cognitive war’ technologies, disinformation and cyberwar emerge as the primary sources of tension and drivers of conflict between major powers. The unregulated digital environment generates ever-greater complexity, uncertainty and speed, increasing the ‘sphere of misjudgment’ in security systems.
Walled Gardens: towards 2035
After the wars in Europe and the Middle East, political turmoil in the West and growing tensions in Asia, Africa and Latin America, the world steps back from the brink, averting further conflicts and open war. Yet conflicts beneath the threshold of war remain. Intelligence and military security frame relations. ‘Everything wars’ fall short of open conflict, shifting to intelligence contests and battles over high-end chips, raw materials and key ‘technologies of the future’.
Self-reliance reframes trade. Walls go up. While core internet-based infrastructures remain, national security and the deepening threats of cross-border cyberwar and propaganda limit digital collaboration.
Data and AI-based services are defined by state and regional ideologies and cultures. Over time, the dominance of global media and technology service providers wanes. The world is divided between the digital haves and have-nots.
As growing heat, food and water shortages disrupt industry and global supply chains, and amidst more frequent major climate events, states withdraw behind strong borders. Virtual, albeit fragmented and much diminished, digital cross-border trade remains.
The crises give way to a new narrative: green, post-industrial nationalism and relative calm, at least for some. The combination of localised, fully electric microgrids, solar, wind and battery storage creates self-sufficiency and resilience at all levels—from small rural communities to cities. Energy costs drop. The transformation improves prospects in the global south and reduces threats of conflict.
The emergence of what becomes known as walled gardens concentrates power in states capable of managing the dual transition to green infrastructures and digital systems. The threats of inter-systemic, cross-border and climate-induced conflict give way to a world of islands of relative stability and oceans of crisis.
Renaissance: towards 2035
Picture this. As the security, climate and digital systems infrastructure crises crystallise, the major powers find common ground through a mutual understanding of the scale and severity of the situation facing humanity.
Shared views and interests around the ‘planetary commons’—the biosphere, oceans, polar regions and space—drive cooperation on climate, bio-security, and around humanitarian principles. Healthcare moves centre stage. Reinvented international institutions find new influence and authority.
As fears of the weaponisation of AI and autonomous systems crystallise, a new generation of multilateral governance mechanisms emerge. New rules for trade, finance, data and strategic technologies support collaboration on AI, particularly in early warning systems around environmental systems and threats to peace.
In the face of accelerating climate change, governments take urgent and radical steps to secure global access to food, water and energy. Digital systems transform health and core infrastructures, and national and global mass migration shapes resilient havens.
The fragmented risk landscape is defined by geography: the impacts of global climate and biosphere crises are experienced locally, and many of the solutions are therefore local. The use of high-energy AI systems is restricted as high-security, decentralised and, ultimately, distributed digital platforms and AI models gain traction.
Power structures mirror the emerging digital landscape—globalised, networked, distributed and decentralised—driving a collaborative era of alignment between leading technology companies and states. Despite tensions, the system conditions emerge for ‘good AI’.
The era of ‘one-size-fits-all’ search and successive generations of generative AI persists alongside the new systems, but the shift marks a turning point. Intelligent, inter-operable autonomous agents, screened for safety and licensed, emerge at multiple levels. They both learn and adapt to community and state-level cultural norms. Embedded within the new generation of agents, sets of rules cover everything from national laws to ethics. Humanitarian principles of impartiality, neutrality and independence form the backbone of new systems. The so-called ‘Spatial Web’—in which digital information is integrated with physical objects, interconnecting people, places and things[4]—gains momentum.
The system conditions for AI evolve, bringing AI applications and services to the global south and small communities alike. A world of pervasive, open predictive modelling systems shifts the emphasis from data extraction and surveillance to early warning of food, water, energy, health and environmental threats, transforming biophysical systems and contributing to creating the conditions for peaceful coexistence.
At national, regional and international levels, governance is aligned around early warning and policy interventions at system levels, framed by long-term strategic frameworks and sustainable development principles. The focus shifts to anticipation of possible crises and to pre-emptive multilateral action based on shared understanding and transparency.
Nowhere is the impact of the shift in the culture and intentions of states and technology companies towards sustainable values more marked than in humanitarian support for the poor, the displaced and the victims of conflict and climate crises. From an era often characterised by the ‘de-humanising’ influence of technologies, the focus turns to protecting the interests of victims in crisis situations and to delivering early warning to avert crises. Long driven by security applications and priorities, technology is turned to sustainable development and humanitarian principles—to creating the conditions for peace and to save lives, reduce suffering, and improve health and well-being.
Conclusion
As these scenarios illustrate, humanity has many choices between multiple alternative futures. Renaissance describes one of many possible pathways to a more sustainable world.
To change systems of governance, the challenge is to change cultures of governance and entrenched short-term worldviews. We must introduce new ways of thinking about the future and simulate everything from future cascading climate and systemic risks to scientific discovery and breakthrough invention. This must focus not only on historical data, specialist research, or scientific models but also on possible futures—secret worlds where there is little evidence but infinite possibility and potential.
As we suggested earlier, the challenge is to frame governance and policy as system-level interventions, set in long-term, scenario-based strategic frameworks. In our view, policy navigational models should focus on early anticipation of events, ultimately in real-time—beyond the conventional, static scenario models that have constrained agenda-setting governance for decades.
September 2024
Footnotes
[1] Peter Frankopan, The Earth Transformed (2023).
[2] Christopher Clark, The Sleepwalkers: How Europe Went to War in 1914 (2012).
[3] George Orwell, ‘Politics and the English Language’ (1946).
[4] https://vision.hipeac.net/the-next-computing-paradigm-ncp–the-spatial-web.html