On Foresight: 5
To recap: long-term foresight has typically relied on scenarios, supported by wide-ranging intelligence-gathering, imaginative human talent and storytelling.
This is changing. Specialist data and applications are emerging as governments, corporations and financial institutions try to minimise the risks of systemic failure. The unwritten goal is ‘no surprises’. In parallel, inventors and entrepreneurial investors are looking for an edge in the endless search for advantage through new products and services.
Weak data, incomplete models and fundamental limits stand in the way. After all, the future is open. Models cannot capture the complexity and chaos of the real world. They are always vulnerable to ‘externalities’. The best-known example is the pre-2008 economic models that excluded the financial markets and the role of money.
Models also break down in the face of shocks and the re-adjustments that follow, as multiple ‘actors’ re-think their strategies and behave in inherently unpredictable ways, often behind the cloak of secretive playbooks, opaque culture, or hidden financial exposures. At best, they are approximations.
Even so, AI-driven foresight tools and techniques are on the horizon: a vision of a future world where human and machine work together and everything is about simulation, predictive systems and human imagination.
Human, Machine: the emergence of AI-driven foresight
Time was when state-of-the-art mathematics and data-based prediction systems were confined to short-term financial market trading by elite ‘algo’ hedge funds. Since then, the number of so-called ‘alternative data’ analysts has risen rapidly as asset managers look for an advantage on short-term price movements.
AI-driven media analysis tools now occupy niches between the short and long term. To illustrate, machine learning can detect hidden themes within ‘unstructured’ text, relate them to influential ‘actors’ within networks and interpret emerging risks and opportunities. There are examples of machine learning reducing the uncertainty surrounding the outcome of the UK general election using sampled Twitter data. AI is also revealing hidden intentions in ‘set piece’ text about policy, such as central bank minutes.
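To make the idea of theme detection concrete, here is a minimal sketch using only the Python standard library. Production systems would use a proper topic model (such as LDA) over millions of documents; the toy corpus, stopword list and co-occurrence heuristic below are all illustrative assumptions.

```python
from collections import Counter
from itertools import combinations

# Toy corpus standing in for 'unstructured' text.
docs = [
    "central bank signals rate rise amid inflation fears",
    "inflation fears push bank to signal rate rise",
    "election polling shifts as turnout models update",
    "turnout models and polling data shape election forecasts",
]

STOPWORDS = {"and", "as", "to", "amid", "the", "of", "a"}

def tokens(doc):
    return [w for w in doc.lower().split() if w not in STOPWORDS]

# Count how often word pairs co-occur within a document; pairs that
# recur across documents approximate the 'hidden themes' a real
# topic model would surface.
pair_counts = Counter()
for doc in docs:
    for pair in combinations(sorted(set(tokens(doc))), 2):
        pair_counts[pair] += 1

themes = [pair for pair, n in pair_counts.most_common() if n > 1]
```

Even this crude sketch separates the ‘monetary policy’ vocabulary from the ‘election’ vocabulary; richer models add probabilistic structure, but the underlying intuition is the same.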
The problem is that there is little ‘news about the future’, or ‘future data’, from which machine learning can generate some form of predictive analysis. AI in its current forms depends on historical data and often static relationships to derive future projections.
One of the clearer exceptions to the ‘AI works only short-term’ narrative is intellectual property (IP) landscape analysis, which can typically provide evidence of the shape and structure of future markets years, if not decades, ahead. Clusters of inventions reveal hot spots of innovation, as well as interconnections that together show where multiple inventions might be integrated to create new families of products and services. They can also identify ‘white space’ – potential gaps at the intersection of technology and latent market demand. Both are important, but white space has the potential to act as the catalyst for creative thinking and direct investment priorities.
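The hot spot / white space distinction can be sketched in a few lines of Python. Real landscape tools work from patent classification codes (for example CPC) across millions of filings; the records and tags below are invented purely for illustration.

```python
from collections import Counter
from itertools import combinations

# Toy patent records, each tagged with technology areas.
patents = [
    {"battery", "sensor"},
    {"battery", "sensor", "wireless"},
    {"wireless", "antenna"},
    {"battery", "antenna"},
    {"sensor", "imaging"},
]

tag_counts = Counter(t for p in patents for t in p)
pair_counts = Counter(
    pair for p in patents for pair in combinations(sorted(p), 2)
)

# 'Hot spots': technology pairs that co-occur repeatedly, marking
# clusters of active invention.
hot_spots = [pair for pair, n in pair_counts.items() if n >= 2]

# 'White space': individually active technologies that never
# co-occur -- candidate gaps where inventions might be integrated.
active = [t for t, n in tag_counts.items() if n >= 2]
white_space = [
    pair for pair in combinations(sorted(active), 2)
    if pair_counts[pair] == 0
]
```

At scale, the same logic, applied to classification codes and citation links rather than hand-picked tags, is what lets landscape analysis point years ahead to where markets may form.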
One explanation for this is that academic and patent literature is a rich source of codified, forward-looking evidence that lends itself to machine applications. Coupled with ‘computational creativity’ techniques, we can see the emergence of one of the most important ‘AI-foresight’ narratives, which will drive innovation and, in turn, wealth creation and investment opportunities.
Similar ‘narrow’ applications are demonstrating predictive power, highlighting, for example, which inventions will lead to patents and successful product trials in, say, healthcare. These applications can evaluate, at an early stage, which inventions and products will make progress: from the likelihood of an idea gaining traction through citations in the scientific community, to take-up by industry, clinical trials and, ultimately, validation for general use. These techniques are important because the vast scale of academic papers and patent literature is overwhelming.
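The staged evaluation described above can be reduced to a simple funnel model: chain the historical conversion rate between each stage to estimate how far a new invention is likely to progress. The stage names and counts below are invented for illustration, not drawn from any real dataset.

```python
# Hypothetical historical counts of inventions reaching each stage.
stage_counts = {
    "idea": 10000,
    "cited": 2500,
    "industry_uptake": 500,
    "clinical_trial": 100,
    "validated": 20,
}

def progression_probability(start, end, counts=stage_counts):
    """Chain per-stage conversion rates to estimate the chance an
    invention currently at `start` eventually reaches `end`."""
    stages = list(counts)
    i, j = stages.index(start), stages.index(end)
    prob = 1.0
    for a, b in zip(stages[i:j], stages[i + 1:j + 1]):
        prob *= counts[b] / counts[a]
    return prob

p_idea = progression_probability("idea", "validated")
p_cited = progression_probability("cited", "validated")
```

Real systems replace these flat rates with learned models conditioned on features of each invention, but the funnel framing explains why early signals, such as citations, carry so much predictive weight.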
There are other narrow applications where patterns in human behaviour, say in the aggregation of lending and credit intelligence, can be extended to the macro level to simulate consumer confidence. Human behaviour can be predicted with increasing accuracy, particularly from video feeds, opening up more accurate projections of future markets. More prosaic applications, nonetheless important to insurers, include predicting driver safety. The more data, the better.
Macro-level services are improving, and quantum computing may deliver step changes in vital fields such as the biosciences. Meanwhile, current network and complexity analysis software can highlight levels of fragility and identify the type of system instability that may trigger a sudden transition from a stable to a chaotic ‘state’. These applications may, for example, map the underlying stress levels in corporate performance, or in a market sector, pointing to potential downturns or shocks.
Where the data is available, these ‘model free’ techniques can reveal the underlying structure of networks, identify the ‘hubs’ on which stability depends, and map the dynamic relationships between them and what binds them together, in order to assess adaptive capacity. Greater complexity ultimately leads to fragility and ‘the edge of chaos’. Whilst they can help reveal that systems are vulnerable, they cannot isolate what might trigger collapse, though weak ‘hubs’ in the network are key indicators of systems in transition.
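The hub-dependence point can be illustrated with a toy network analysis in standard-library Python: find the most connected node by degree, remove it, and count how many fragments the system breaks into. The edge list is invented; real analyses use far larger graphs and richer centrality measures.

```python
from collections import defaultdict

# Toy hub-and-spoke network whose stability depends on one node.
edges = [
    ("hub", "a"), ("hub", "b"), ("hub", "c"),
    ("hub", "d"), ("a", "b"), ("c", "d"),
]

def build(edges, removed=frozenset()):
    graph = defaultdict(set)
    for u, v in edges:
        if u not in removed and v not in removed:
            graph[u].add(v)
            graph[v].add(u)
    return graph

def components(graph):
    """Count connected components via depth-first search."""
    seen, count = set(), 0
    for start in graph:
        if start in seen:
            continue
        count += 1
        stack = [start]
        while stack:
            node = stack.pop()
            if node not in seen:
                seen.add(node)
                stack.extend(graph[node] - seen)
    return count

graph = build(edges)
# Degree centrality identifies the hub on which stability depends.
hub = max(graph, key=lambda n: len(graph[n]))
before = components(graph)
after = components(build(edges, removed={hub}))
```

Removing the hub splits one connected system into disconnected fragments: exactly the kind of structural fragility these techniques are designed to surface before a transition occurs.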
AI Limitations and Risks to Financial Markets
For all the progress, the problems run deeper. How can AI techniques driven by data and models become convincing, and so pervasive, when they are opaque and difficult, sometimes impossible, to understand, even by their designers? Asking decision-makers to act on a long-term view or outcome when the arguments are shrouded in mysterious algorithms is not a viable strategy.
Similarly, AI does not yet reveal narrative structures. Since narratives shape human behaviour and, for example, political relationships within systems, this is critical. This amongst other things puts boundaries around the art of the possible, as we live in a ‘system of systems’ world, where everything is increasingly interconnected. Even market prices and the value of money are emergent properties of the complex interaction of many dynamic forces. As David Orrell puts it, prices ‘are not the optimal result of a mechanical, Newtonian process, but an emergent property of the money system’.
Paradoxically, simplicity is best:
‘Models are best seen as patches which capture some aspect of the complex system. The problem with reductionist models of any sort is that as they are made more detailed, the number of unknown parameters, whose values cannot be accurately inferred from the data, tends to explode. This is why, paradoxically, simple models outperform complicated ones.’
In other words, more sophistication, data and codification of key system variables may not lead to greater predictive accuracy or foresight. One reason is that models and prices are social constructs, or cultural products. They are anchors and frameworks of imagined futures and constantly changing narratives. They are also subject to herd behaviour. If a model narrative gains acceptance, it is not only seen to ‘work’ but becomes self-referential. It has influence because experts in the system use it as a rule of thumb and guideline. A ‘theory is likely to be accepted if it tells a story that benefits a powerful constituency’.
In any case, decisions in the financial markets, or in the evaluation of corporate prospects, rely on views about the future. They are about imagined futures, which can be opaque, uncertain or themselves subject to intelligence estimates that owe more to human behaviour, judgment and decision-making expertise than some AI and mechanistic worldviews might lead us to think.
One last example. As several analysts have pointed out in the context of exchange-traded funds (ETFs), the combination of so-called ‘black box’ algorithms acting as ‘actors’ in complex systems like financial markets and volatile, sometimes random, human behaviour creates profound uncertainty. The ETF and algorithm narratives may break down. Prepare for the shock.
Yet for all the limitations, a future world where humans and machines work together and everything is about simulation, predictive systems and human imagination is fast approaching.
Scenarios will remain the most flexible set of tools and techniques, but AI and ever-improving data promise a future where machines and human creativity are combined in increasingly sophisticated forms.