AI in Cyber Security: A Technology for Good and Evil

In recent years, cyber-attacks targeting organisations in both the public and private sectors have grown in number. Combined with the added risks of cloud computing and the Internet of Things (IoT), this growth is making it increasingly challenging for organisations to safeguard their systems against sophisticated, machine-speed attacks.

Attackers target systems in sectors such as banking, transportation, law, the military, academia, and healthcare, to name a few, in order to exfiltrate sensitive and confidential data. A successful attack against an organisation can easily lead to reputational damage and subsequent adverse effects such as hefty fines and loss of customers. Through attacks such as WannaCry (a form of ransomware), adversaries were able to compromise hundreds of thousands of IT systems. Similarly, through NotPetya (a wiper attack, shifting the motive from financial gain to data destruction), cyber criminals inflicted heavy financial damage on Maersk, a global shipping conglomerate; Merck, a pharmaceutical giant; and a subsidiary of FedEx.

According to a 2017 Accenture study, cyber crime causes multibillion-dollar losses to businesses globally, with average losses per organisation ranging from US$3.8 million in the smallest quartile to US$16.8 million in the largest. As a result, Artificial Intelligence (AI) is increasingly being used to counter cyber attacks, enabling organisations to strengthen their network defences and mitigate the impact of breaches.

Artificial Intelligence – A Force for Good

AI has received a great deal of attention from security researchers in both academia and industry, and it is having a growing impact on fields such as cyber security. Deployed alongside traditional methods, AI can be a powerful tool for protecting organisations' systems against cyber attacks.

Using machine learning (ML) algorithms, computers can sift through large volumes of data and produce insights in milliseconds, without being programmed in advance to detect a particular threat; the same analysis could take humans years. ML and deep learning (DL) models can learn and differentiate network traffic patterns. Using them, AI systems can take on the heavy lifting of separating anomalies from normal background noise, detecting, predicting and responding to cyber attacks autonomously and intelligently as they occur in real time.
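To make the idea of separating anomalies from normal noise concrete, here is a deliberately minimal sketch: a statistical z-score test over per-minute request counts. The function name, threshold, and traffic data are illustrative assumptions; real products learn far richer traffic features with ML models rather than a single statistic.

```python
import statistics

def detect_anomalies(request_counts, threshold=2.5):
    """Flag time slots whose request volume deviates sharply from the baseline.

    A z-score test is a simple stand-in for the ML models discussed above:
    anything more than `threshold` standard deviations from the mean is
    reported as a potential anomaly.
    """
    mean = statistics.mean(request_counts)
    stdev = statistics.stdev(request_counts)
    anomalies = []
    for minute, count in enumerate(request_counts):
        z = (count - mean) / stdev if stdev else 0.0
        if abs(z) > threshold:
            anomalies.append((minute, count, round(z, 2)))
    return anomalies

# Mostly steady traffic with one sudden spike (e.g. possible exfiltration)
traffic = [120, 118, 125, 130, 122, 119, 900, 121, 124, 117]
print(detect_anomalies(traffic))  # flags the spike at minute 6
```

A production system would replace the single statistic with models trained on many features (ports, payload sizes, destinations), but the principle of learning a baseline and flagging deviations is the same.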

For instance, billions of data points can be fed to an AI system that recognises patterns humans would not necessarily notice, prioritises them, and decides which are false positives and which are real threats. The system can then throttle traffic to the affected parts of the network before any data is compromised, giving security teams vital time to resolve the issue before it is too late.
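One common way an automated response can "reduce the speed of traffic" to an affected network segment is a token-bucket rate limiter. The sketch below is a generic illustration of that mechanism, not any specific vendor's implementation; the class name and parameters are assumptions for the example.

```python
import time

class TokenBucket:
    """Throttle requests from a flagged source to a fixed sustained rate.

    Once an anomaly is detected, lowering the affected segment's allowed
    rate limits the damage while the security team investigates.
    """
    def __init__(self, rate_per_sec, capacity):
        self.rate = rate_per_sec        # tokens replenished per second
        self.capacity = capacity        # maximum burst size
        self.tokens = capacity
        self.last = time.monotonic()

    def allow(self):
        # Replenish tokens for the time elapsed since the last check.
        now = time.monotonic()
        self.tokens = min(self.capacity,
                          self.tokens + (now - self.last) * self.rate)
        self.last = now
        if self.tokens >= 1:
            self.tokens -= 1
            return True
        return False  # request delayed or dropped

# A burst of 100 near-instant requests: only a handful get through.
bucket = TokenBucket(rate_per_sec=5, capacity=5)
allowed = sum(bucket.allow() for _ in range(100))
print(f"{allowed} of 100 requests admitted")
```

Real network defences apply the same idea at firewalls or load balancers, tightening the permitted rate for suspicious sources rather than cutting them off outright, which preserves service while containing the attack.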

Artificial Intelligence – A Force for Evil

Despite its many benefits, AI can also be put to malicious use, for instance by allowing attackers to carry out faster and more impactful attacks. A recent report by the Capgemini Research Institute reveals that hackers can use AI algorithms to send spear-phishing tweets six times faster than a human, and with twice the success rate (Capgemini Research Institute, 2019).

Furthermore, AI can detect and exploit network vulnerabilities that humans often overlook in order to launch targeted attacks. Robots with autonomous functions are also increasingly being used in military combat, while little to no attention is paid to the implications of their ability to make autonomous decisions.

For example, such robots can find power sources on their own and independently decide whom to kill and whom to spare. AI has also been used in surveillance, the spread of fake online content, infringements of data privacy, autonomous vehicles, and drones that require no human controller. Moreover, AI is increasingly used in facial and voice recognition systems that are susceptible to human biases and errors, because the data used to train them can itself be biased.


Cyber attacks can wreak havoc on global stability and have devastating impacts on our societies. In recent years, we have witnessed a series of breaches at organisations that hold millions of records on almost everyone, despite those organisations spending millions on cyber defence strategies.

AI-enabled cyber security is becoming increasingly necessary for organisations to protect their systems, because AI lets us detect security breaches automatically and strengthen our security measures. However, like any other technology, AI can be deployed for both good and evil.

AI poses many security challenges, such as its use in spreading fake news or mounting cyber attacks. Many questions about the ethics of AI also remain unanswered, such as whether AI will substitute for people in positions that require respect and care. The debate over whether AI-enabled robots will one day overtake the world continues to dominate conversations among both proponents and critics of AI. Given the many uncertainties and challenges ahead, AI must therefore be carefully managed through a moral framework, laws, and policies that govern its use in cyberspace and beyond.

To this end, AI developers have an ethical obligation to be transparent in their efforts, and governments have a duty to establish specific, robust policies that ensure transparency and human accountability in the application of AI in cyberspace. This will require a collective effort by policymakers worldwide, who must come together to agree on ethics and codes for AI. Failing to do so will lead to an AI-weaponised and unsafe cyberspace.
