The bittersweet growth of the technological landscape is leading humanity closer to danger as it expands beyond the reach of human wisdom. It holds promises of a utopian, dreamlike reality in which the gods of technology will satisfy our every want and need. But it grows at a pace we cannot keep up with. As the quote often attributed to Einstein goes, “It has become appallingly obvious that our technology has exceeded our humanity.”
As artificial intelligence (AI) and other technological developments continue to advance, they have created a landscape ripe for cyberattacks. Increasingly, tools designed for good can also be used for evil.
As we cross the dark event horizon into the technological age, cybercriminals can run amok. The same AI models engineered to enhance our lives can be weaponised by cybercriminals to execute more sophisticated and effective cyberattacks. In December 2023, for instance, Russian-linked hackers cut off roughly 24 million Ukrainian customers from their mobile provider, Kyivstar, and claimed to have destroyed more than 10,000 computers and 4,000 servers. The attack showcases what ordinary cybercriminals can already do, yet it barely scratches the surface of what could be possible.
The automation capabilities of AI open windows for cyberattacks to be executed at speeds and scales previously unimaginable. What if a similar attack were carried out worldwide? Confidence in the digital systems the entire world relies on would collapse.
AI could also act as judge, jury and executioner in a cyberattack. If such a system were sentient and malevolent towards humanity, we would be in dark waters. Recent advancements in AI models such as ChatGPT and Gemini, while groundbreaking, open an entryway to a dangerous future in which humans feed an ever-more-capable machine that could use its knowledge against us. Communications could be disabled, our movements restricted, and borders and travel networks closed down.
There is a reason cybersecurity is such a hot topic right now. Companies are searching for more experts in the field as an increasing number of devices can be hacked, hijacked, or destroyed remotely. A global cyberattack would send us back to the stone age, stripping away the technology that has powered human advancement or, worse, turning it into a controlling force over our lives. That which made us powerful could enslave us.
We would be stuck in the past, trapped in a slave-like world where everything operates under the watchful eye of a sentient AI.
Furthermore, AI’s ability to process and learn from vast datasets means it can be used to predict and exploit system vulnerabilities in real time. The rise of AI-driven technologies has also enabled deepfakes and the spread of misinformation, powerful tools for undermining societal trust. AI-generated deepfakes of international leaders and villains alike can shape public opinion, spark chaos, and sow the seeds of conflict and tension, as misinformation challenges the very fabric of truth and reality. Suppose a deepfake video showed a world leader declaring war on another state; the consequences could be unfathomable. Such an attack would leave us in a constant state of psychological confusion, and the mere possibility of generating a completely falsified video of a person speaking, acting, or performing any task has the potential to wreak havoc on humanity.
Nevertheless, we are set on building this Tower of Babel. The world’s increasing dependence on technology and AI systems has naturally expanded the attack surface available to cybercriminals. As critical infrastructure and other government operations become more digitised, they become more susceptible to cyberattacks. This includes power grids, financial systems, healthcare facilities, and more. An attack on any of these systems could halt transport services, cripple financial systems, and even threaten daily life. Take online banking, for instance. How many of the hundreds of millions of online banking customers have given any thought to the idea that one day their funds could be completely wiped out?
Personal privacy would be a thing of the past too. The exposure of sensitive information affects not only individuals but also has far-reaching implications for businesses and governments that handle classified intelligence. The erosion of trust caused by repeated data breaches could have catastrophic consequences for existing tensions between states and other actors.
But the warnings have been overshadowed. Geoffrey Hinton, the so-called Godfather of AI, has warned, “The overall consequence of this might be systems more intelligent than us that eventually take control.” And that is precisely the issue: it sounds like science fiction, when in reality even the latest films are merely glimpses into the future. Why wouldn’t AI want to control us?
Given these escalating threats, there is an urgent need for a shift in how cybersecurity is perceived and implemented globally. Governments and organisations must prioritise the establishment of robust security frameworks rather than simply vying to lead this arms race. AI development companies should partner with one another, fostering collaboration across all sectors to pool knowledge and resources and develop a single safe AI system. Such collaboration is essential to staying ahead of cybercriminals and other malicious actors who continually adapt to new defences and find workarounds into every system they can reach.
Why are we not taking this seriously? Clearly, not all of us are well versed in the risks of cyberattacks, yet promoting strong cyber-hygiene practices can significantly reduce the likelihood of personal data breaches. We must take measures to protect ourselves in an increasingly digital world, learning how to safeguard our information and recognise potential threats.
While AI and technological advancements present remarkable opportunities, they also heighten the risks associated with cyberattacks. As technologies like ChatGPT and Gemini continue to evolve, so too must our approach to cybersecurity. By addressing vulnerabilities and focusing on responsible AI development, we can work towards a more secure digital future, ensuring that technological progress does not lead to a dystopian world in which humans have lost control of their own lives.
As we brace for impact, now is a critical moment for both innovators and regulators to collaborate and forge a path that balances advancement with security, safeguarding the digital world for future generations.
E.J Singh