From misinformation and undetectable cyber attacks to irresponsible artificial intelligence capable of causing mass-fatality accidents, futurists have predicted how rapid technological change could shape our world by 2040.
As advances in computer technology accelerate and systems become more interconnected, it is essential to understand how this rapid progress may affect the world, so that steps can be taken to prevent the worst outcomes.
Using a Delphi study, a well-known forecasting technique, a team of cybersecurity researchers led by academics from Lancaster University interviewed 12 experts about the future of technology.
The experts included company chief technology officers, futures consultants, technology journalists and academic researchers. They were asked how certain technologies might evolve and change our world over the 15 years to 2040, what risks they might pose, and how to address the challenges that might arise.
Most experts predict exponential growth in artificial intelligence (AI) over the next 15 years, and many also voiced concern that safe AI development could be sidelined, driven by nation-states seeking a competitive advantage. Many even considered it plausible that misapplied AI could lead to accidents involving many deaths, although other experts disagreed with this view.
Dr Charles Weir, lecturer in the School of Computing and Communications at Lancaster University and lead researcher on the study, said: “Technological advances have brought, and will continue to bring, great benefits. We also know that there are risks surrounding some of these technologies, including artificial intelligence, and where their development could go – everyone is discussing it at the moment – but the potential scale of some of the risks predicted by some of the experts was astonishing.
“But by anticipating potential risks that lie just beyond the horizon, we can take steps to avoid major problems.”
Another important concern, expressed by most of the experts in the study, was that technological advances will make it easier for misinformation to spread. This would make it difficult for people to distinguish fact from fiction – with consequences for democracies.
“We are already seeing misinformation on social media, and some nation-states are using it,” Dr Weir said. “Experts predict that advances in technology will make it much easier for people and bad actors to continue spreading misinformation by 2040.”
Other technologies are expected to have little impact by 2040, including quantum computing, which experts see as having impacts over a much longer time frame, and blockchain technology, which most experts have dismissed as a source of significant change.
Experts expect that:
· By 2040, competition between nation-states and big technology companies will have sidelined the development of safe AI
· Quantum computing will have a limited impact by 2040
· By 2040, public web assets will be owned, identified and traded through digital tokens
· By 2040, it will be difficult to distinguish fact from fiction, because widely accessible AI will be able to generate questionable content at scale
· By 2040, it will be harder to distinguish accidents from criminal incidents, due to the decentralized nature and complexity of systems
The experts also suggested ways to alleviate some of the concerns they raised, including governments introducing AI safety principles and new laws regulating AI safety. Universities, they added, could play a vital role by offering courses that combine technical skills with legislation.
These forecasts will help policymakers and technology professionals make strategic decisions about developing and deploying new computing technologies. They are outlined in a paper titled “Interconnected Computing in 2040: Safety, Truth, Ownership and Accountability”, published in the peer-reviewed journal IEEE Computer.
The paper's authors are Charles Weir and Anna Dyson of Lancaster University; Olamide Jogunola and Katie Paxton-Fear of Manchester Metropolitan University; and Louise Dennis of the University of Manchester.