The world’s first computer virus was created in 1971. Named after a character on ‘Scooby-Doo,’ Creeper spread through computers on ARPANET, the Advanced Research Projects Agency Network, an early packet-switching network, displaying a harmless taunt: “I’m the creeper, catch me if you can!” Creeper in turn inspired Reaper, the world’s first antivirus program, which chased Creeper down and deleted it. And thus, the arms race between hackers and security experts was born.
In the 1970s, few could imagine today’s onslaught of hacks, data breaches, viruses, malware and fraudsters. In the same way, we can hardly imagine the cyber attacks of tomorrow.
Last year, Symantec determined that 978 million people across 20 countries were affected by cybercrime, at a cost of $172 billion, or $142 per person. The use of artificial intelligence (AI) in breaches and attacks is also on the rise; by 2021, global spending on cognitive and AI systems could reach $57.6 billion.
Some believe that once the era of AI attacks begins, there will be no turning back. If these machines are smart enough to blend into the background, there’s no telling what they can infiltrate and what they will do once they’ve breached a system.
The Rise of AI Attackers
In a vacuum, machine learning is a remarkable breakthrough. A computer could sift through millions of medical records, refining its knowledge until it knows exactly how to treat a particular patient with heart disease. But in a less benign scenario, a machine could teach itself to attack, launching ever more sophisticated assaults as it learns new skills.
Sound farfetched? Machine learning is used every day by personal assistants like Alexa and Siri. Similar to a child learning to walk, machine learning applications train a computer to complete certain tasks by trying — and failing — over and over. While these powerful assistants can automate many of our more burdensome tasks, they could also be used for nefarious purposes.
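The trial-and-error loop described above can be sketched with a toy reinforcement-learning example: tabular Q-learning on a five-cell corridor. Everything here (the corridor, the rewards, the hyperparameters) is an illustrative assumption, not how any commercial assistant is actually built.

```python
import random

# Toy illustration of learning by trial and error (tabular Q-learning).
# The agent starts at cell 0 of a 5-cell corridor and, through repeated
# failed attempts, learns that moving right reaches the goal at cell 4.
N_STATES = 5          # cells 0..4; cell 4 is the goal
ACTIONS = [-1, +1]    # step left or step right
ALPHA, GAMMA, EPSILON = 0.5, 0.9, 0.1

q = {(s, a): 0.0 for s in range(N_STATES) for a in ACTIONS}

def run_episode():
    state = 0
    for _ in range(50):  # cap the number of steps per attempt
        if random.random() < EPSILON:
            action = random.choice(ACTIONS)                      # explore
        else:
            action = max(ACTIONS, key=lambda a: q[(state, a)])   # exploit
        nxt = min(max(state + action, 0), N_STATES - 1)
        reward = 1.0 if nxt == N_STATES - 1 else -0.01           # small cost per step
        best_next = max(q[(nxt, a)] for a in ACTIONS)
        q[(state, action)] += ALPHA * (reward + GAMMA * best_next - q[(state, action)])
        state = nxt
        if state == N_STATES - 1:
            break

random.seed(0)
for _ in range(200):   # 200 attempts, most of them failures early on
    run_episode()

# The learned policy at every non-goal cell: +1 means "move right".
policy = [max(ACTIONS, key=lambda a: q[(s, a)]) for s in range(N_STATES - 1)]
print(policy)
```

The point is the shape of the loop, not the toy problem: nothing told the agent what to do; it discovered the winning behavior purely from repeated failure and feedback, and the same loop works whether the "goal" is benign or not.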
Imagine a computer that uses machine learning to infiltrate a power grid — launching numerous attacks and learning from its mistakes until it finally breaks in and takes over. News of such an event swept the internet last year, heralding a real-life version of Skynet. According to alarmist headlines, Facebook had to shut down its AI project after the robots invented their own creepy language.
The truth was less sensational. The engineers had not incentivized the bots to communicate in English, so they invented their own shorthand to speak to each other. What may have looked scary was really just two bots negotiating in their own language over everyday items like hats, balls and books.
While the whole incident was relatively innocuous, it did raise concerns about how quickly two robots could develop a form of communication that was incomprehensible to humans.
Nefarious Uses of AI
This year, researchers from Oxford and Cambridge warned that AI could be used to hack drones and autonomous vehicles, raising the possibility of rogue bombings and intentional car crashes.
For example, Google’s Waymo self-driving cars use deep learning, and that system could potentially be tricked into interpreting a stop sign as a green light — causing a deadly crash. Or, a swarm of commercial drones could quickly launch a coordinated attack on an unwitting target, allowing for plausible deniability as to who caused the attack. And, as airplanes become increasingly automated, there’s always a chance that a terrorist with hacking skills could hijack or crash a plane at cruising altitude.
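To make the stop-sign scenario concrete, here is a minimal, hypothetical sketch of how a small, targeted perturbation can flip a classifier’s decision. The “model” is a toy linear scorer with made-up weights, standing in for the deep networks a real attack would target; every number below is an assumption for illustration.

```python
# Toy "adversarial example" against a hypothetical linear classifier.
# Hypothetical weights: a positive score means "stop sign".
w = [0.9, -0.4, 0.7, 0.2]
b = -0.1

def score(x):
    return sum(wi * xi for wi, xi in zip(w, x)) + b

def predict(x):
    return "stop sign" if score(x) > 0 else "not a stop sign"

x = [1.0, 0.2, 0.8, 0.5]            # a clean input the model classifies correctly
assert predict(x) == "stop sign"

# Fast-gradient-style perturbation: nudge every feature by a fixed amount
# in whichever direction lowers the "stop sign" score.
eps = 0.7
sign = lambda v: 1.0 if v > 0 else -1.0
x_adv = [xi - eps * sign(wi) for xi, wi in zip(x, w)]

# Each feature moved by the same small step, yet the decision flips.
print(predict(x_adv))
```

Against a deep network the per-pixel changes can be far smaller, often invisible to a human, which is what makes this class of attack so unsettling.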
These intelligent machines could also lower the cost of carrying out cyber attacks — automating certain tasks and scoping potential targets on their own.
For many of us, aspects of daily life are already automated by virtual assistants and IoT-connected devices. But this convenience requires that a large amount of personal data reside in the cloud. This complex network of connections will create new levels of vulnerability, with cyber attacks hitting much closer to home — for example, criminals, terrorists or rogue governments could target a world leader’s pacemaker or other internet-connected medical device.
Fifty years from now, keys and alarm systems may not be enough to keep us safe at home. Rogue bots could continuously scan networks, searching for vulnerabilities. In this way, an attractive target is simply a vulnerable one. Which means anyone could become a target.
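The continuous scanning described above is, at its core, a simple loop. Below is a minimal sketch of a TCP port scan against localhost — the same primitive that both an attacker’s bot and a defender’s audit tool automate. The host and port list are illustrative assumptions.

```python
import socket

def open_ports(host, ports, timeout=0.3):
    """Return the subset of `ports` accepting TCP connections on `host`."""
    found = []
    for port in ports:
        with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as s:
            s.settimeout(timeout)
            if s.connect_ex((host, port)) == 0:  # 0 means the connection succeeded
                found.append(port)
    return found

# Scans only the local machine; defenders run the same loop to find
# their own exposed services before someone else's bot does.
print(open_ports("127.0.0.1", [22, 80, 443, 8080]))
```

A rogue bot simply runs this kind of loop continuously, across millions of hosts, and feeds whatever it finds into the next stage of an attack.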
AI makes spearphishing more effective and efficient, and it could also be used in the political sphere for advanced surveillance, targeted propaganda and the spread of misinformation.
These novel attacks will benefit from AI’s improved ability to analyze human behaviors, moods and beliefs. AI has already been used to superimpose images of one person’s face onto the body of another; now imagine realistic videos of state leaders making inflammatory comments they never actually made.
What Can Be Done?
Combating the threats posed by nefarious AI will require a concerted effort from all stakeholders. Cyber defense systems will need to become more advanced, able to handle large amounts of data and to analyze and act on it in real time.
To cope with such a large amount of information, we’ll have to rely on artificial intelligence to help us make decisions, and next-generation cyber experts will need to develop and drive these new systems.
Countries will need to expend vast resources to protect important systems like power grids and water supplies, and large corporations will need to protect data at every point in their system — cloud servers, personal computers and even the most inconsequential mobile device.
In the future, cyber defense systems will be smarter, more sophisticated and better equipped to handle vast amounts of data in real time. And as for individuals, data protection will be as commonplace as seat belts and bike helmets.
A Brave New Cyber World
Shortly before his death, renowned physicist Stephen Hawking warned that if we can’t learn to control it, AI could be the “worst event in the history of our civilization.”
To combat the threats posed by artificial intelligence, governments, corporations and law enforcement will have to build new AI tools that can compete with — and even outwit — these smart new machines. And thus, the age-old arms race between hackers and security experts continues.
Bluefin offers P2PE and tokenization services that ensure sensitive payment data is encrypted the moment it enters your system. To learn more about how you can protect your customers’ data, contact a Bluefin representative today.
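As a rough illustration of the tokenization idea — a generic sketch, not Bluefin’s actual P2PE implementation — a random token stands in for the card number, so downstream systems never handle the real value:

```python
import secrets

# Generic tokenization sketch (illustrative only). In a real deployment
# the vault would itself be encrypted and strictly access-controlled.
_vault = {}  # token -> real card number

def tokenize(pan: str) -> str:
    """Replace a card number with a random, meaningless token."""
    token = "tok_" + secrets.token_hex(8)
    _vault[token] = pan
    return token

def detokenize(token: str) -> str:
    """Only the vault can map a token back to the original value."""
    return _vault[token]

token = tokenize("4111111111111111")   # a standard test card number
print(token)                           # safe to store, log or pass around
```

Because the token carries no mathematical relationship to the card number, a breach of the systems that store it yields nothing usable.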