Caroline Baylon, a member of the Secretariat of the All-Party Parliamentary Group for Future Generations (UK Parliament), and a Research Affiliate at the University of Cambridge’s Centre for the Study of Existential Risk (CSER), provides a fascinating response to the IMAJINE scenarios. Caroline’s research focuses on cybersecurity, AI, and defence topics.
I found the scenarios extremely thought-provoking. In particular, they raise a number of important considerations about the likely evolution of technology, including cybersecurity and AI.
An interesting question generated by this scenario is how the AI algorithm determines what is ‘equitable’ when allocating wealth between regions. How might it treat regions that on the whole work longer hours relative to other regions? I could imagine a situation in which individuals within a region collectively decide to work less if they expect that the AI algorithm will give the region additional funding to compensate for their lower output, contributing to the stagnation described in the scenario.
We don’t necessarily understand how AI systems come to the conclusions that they do – that is, how they ‘think’ – and their outputs can surprise us. What if the AI system decided to allocate more funds to regions with fewer immigrants, deeming these regions to be more deserving of funds? Or to ones with a larger male population? AI algorithms learn to make predictions based on patterns that they observe in the datasets they are trained on, so they can replicate biases that are present in the real world. This includes displaying racist and sexist tendencies.
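The mechanism by which an AI system inherits bias can be made concrete with a minimal sketch. The data and the allocation rule below are entirely hypothetical and deliberately simplistic: a model trained on historical funding decisions that happened to correlate with a region's immigrant share will learn to use that irrelevant attribute, reproducing the past pattern in future allocations.

```python
# Hypothetical training data: (immigrant_share, funding_awarded).
# In this invented history, low-immigration regions happened to be funded.
history = [
    (0.05, 1.0), (0.08, 1.0), (0.10, 1.0),   # low-immigration regions: funded
    (0.30, 0.0), (0.35, 0.0), (0.40, 0.0),   # high-immigration regions: not funded
]

def train_threshold(data):
    """Learn the midpoint between the two class means - the simplest
    possible 'pattern' a model could extract from biased history."""
    funded = [x for x, y in data if y == 1.0]
    unfunded = [x for x, y in data if y == 0.0]
    return (sum(funded) / len(funded) + sum(unfunded) / len(unfunded)) / 2

def predict(threshold, immigrant_share):
    # The learned rule: fund a region only if its immigrant share is low -
    # an arbitrary bias inherited entirely from the training data.
    return 1.0 if immigrant_share < threshold else 0.0

t = train_threshold(history)
print(predict(t, 0.07))  # a low-immigration region is awarded funding
print(predict(t, 0.33))  # an otherwise identical high-immigration region is not
```

Nothing in the code "intends" to discriminate; the bias enters purely through the statistical regularities of the training set, which is precisely why such behaviour can surprise the system's designers.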
This scenario’s suggestion that we will see crime which attempts to manipulate sustainability and wellbeing ratings is insightful. We already observe publicly traded companies attempting to manipulate sustainability indices used by investors, so it is not hard to believe that some might decide to commit outright fraud, especially as these types of ratings grow in importance.
I would also envision cyber attacks designed to inflict ecological damage, e.g. causing a manufacturing facility to release toxic compounds into the environment. Cybercriminals might demand that companies pay a ransom in exchange for stopping or not carrying out such attacks.
Given that ICTs require a lot of energy to run and may emit more greenhouse gases than the entire aviation industry, I would expect to see greater uptake of green computing, or the use of computers in ecologically sustainable ways, in line with this scenario’s greater use of technologies to protect or mitigate the effects of climate change. This is likely to include a significant increase in green data centers, which use renewable energy or have components that can automatically go to sleep or turn off.
This scenario’s premise of blurred lines between corporations and government in the form of city-regions could come to pass. If events were to play out beyond the scenario’s 2048 time horizon, I wonder if corporations might absorb governments entirely? Big tech companies are already taking on roles that are traditionally filled by government. In the healthcare sector, Amazon has rolled out its Amazon Care healthcare service, while Apple Health Records makes it possible for individuals to access all of their health records from their phones. Meanwhile, in the field of transport, Uber and other big tech companies are looking to provide driverless buses, while their driverless cars may come to compete with public transportation. It would not be surprising for big tech to further expand into other government domains, such as education and the justice system.
The idea of digital citizenship raised in this scenario is also likely; Estonia’s e-Residency program has already met with considerable success. However, those with oversight over digital citizenship will need to be vigilant as it may be highly susceptible to fraud, including as a vehicle for tax evasion.
This scenario’s suggestion that in some places even intelligent agents have rights, with mistreating Siri seen as equivalent to mistreating a pet, is intriguing. There is precedent for this, as the humanoid robot Sophia was made a legal citizen of Saudi Arabia in 2017. The suggestion raises some important questions: If an intelligent agent like Siri misbehaves, do you hold her responsible? How? Or does the culpability lie with the writers of her code? Since AI algorithms are constantly learning, are her trainers liable? What if she learns from vast numbers of people?
I would imagine that the disinformation alluded to in this scenario was a driving force for much of the fragmentation of European society described, with fake news from Russia instigating the election of autocratic leaders and fanning internal conflicts.
This scenario’s description of a movement that rejects telepresence technology, demands real-life interactions, and is unafraid to use violent means to achieve its objectives is plausible. The movement might even grow to reject technology in its entirety, like the Luddites of the Industrial Revolution, feeling that it threatens livelihoods, and even society as a whole.
Find out more about the IMAJINE scenarios, and read the full scenario document, here.