Are We Implementing AI in Cyber through Rose-tinted Glasses?

Vrizlynn Thing

Senior Vice President, Head of Cybersecurity Strategic Technology Centre, ST Engineering

A disturbing observation prompted me to pen my first blog: more and more organisations are bolstering cybersecurity with AI, but without preparing to stand up against the adversaries of AI itself.

 

There is no denying that Artificial Intelligence (AI) is playing a significant role in fighting cybercrime across multiple industries, with many businesses utilising AI to secure their organisations. From enhanced detection and analysis of both attempted and novel threats, to picking up behavioural abnormalities across an organisation’s network and endpoints, to triggering responses that combat threats faster than ever, AI seems to be the elixir of cybersecurity. The latest report from the Capgemini Research Institute, Reinventing Cybersecurity with Artificial Intelligence, based on a survey of 850 senior executives, shows that:

- nearly two-thirds of the organisations don’t think they can identify critical threats without AI

- almost three-quarters of organisations are testing AI in cybersecurity use cases in some way, with fraud, malware and intrusion detection, network risk scoring, and user/machine behavioural analysis being the leading AI use cases for improving cybersecurity

- three in five firms say that using AI improves the accuracy and efficiency of cyber analysts

Your cyber team may be relying on AI to do the job, and lowering their guard. Therein lies the bigger question: do you know if the AI you implemented is secure and hardened? Could it be exploited by hackers and turned against you?

 

This is a ticking time bomb.

 

The Silent Attacks of Adversarial AI   

Adversarial AI, in simple terms, is the deliberate manipulation of inputs and data to trick your AI systems into producing malicious, and often irreversible, outcomes.

Attackers inject adversarial data, visuals and audio, making minute, near-undetectable changes to the AI model inputs, causing inaccurate identifications and influencing outcomes without triggering any alarms.

Imagine an autonomous vehicle. To human eyes, an adversarially altered image of a road work sign still looks exactly like a road work sign. To the AI, the sign is misidentified, or not recognised at all. The consequences would be unimaginable. Take another example: smugglers working with hackers to hoodwink a visual recognition system, allowing illegal shipments to cross borders undetected. Or a simple AI-enabled assistant device receiving a fake audio command to transfer a payment.
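To make the mechanism concrete, here is a minimal sketch of how a tiny, targeted perturbation can flip a model's decision. It uses a toy linear classifier with made-up weights, not any real detection system, and the perturbation budget is exaggerated so the effect is visible in three numbers:

```python
import numpy as np

# Toy linear "detector": weights are illustrative, not from any real system.
w = np.array([1.0, -2.0, 0.5])
b = 0.1

def predict(x):
    """Return 1 if the model classifies the input as 'sign present', else 0."""
    return int(w @ x + b > 0)

# A clean input the model classifies correctly.
x = np.array([0.9, 0.1, 0.2])

# Fast-gradient-sign-style attack: nudge each feature a small amount
# in the direction that pushes the score towards the wrong class.
eps = 0.5  # exaggerated here; real attacks use changes humans cannot see
x_adv = x - eps * np.sign(w)

print(predict(x), predict(x_adv))  # → 1 0
```

The perturbed input differs from the original by at most `eps` per feature, yet the classification flips, and nothing in the input looks anomalous to a casual observer.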

 

What Can Organisations Do?

For organisations that have already implemented AI, it is vital to take stock of their AI systems, prioritise those that are highly critical and highly exposed to the business, and review and harden the identified models.

As part of the hardening process, organisations should take immediate action to:

- Limit the source of inputs to platforms

Managing and limiting the permitted sources of inputs helps deter adversarial attackers.

- Set the permissible or barred parameters of inputs

Gain clarity on, and where possible set, the parameters of the normal inputs that feed the AI models, and determine the likely adversarial variations. Building pre-emptive adversarial models will also help probe a system's vulnerabilities.

- Formulate the de-noising filtration

Clean out any unusual or unexpected signals, e.g. in audio or pixels, that may cause errors, and nullify possible manipulative elements.

- Engineer resilient modelling and training structure

Structuring a robust model and injecting adversarial cases during training will shape and optimise the system to interpret them correctly.
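The last step, training on adversarial cases, can be sketched in a few lines. This is a toy illustration, assuming a simple logistic-regression model on synthetic two-class data, with adversarial copies generated in the direction that most increases each sample's loss; it is not a production hardening recipe:

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic two-class data: an illustrative stand-in for real telemetry.
X = np.vstack([rng.normal(-1, 0.3, (50, 2)), rng.normal(1, 0.3, (50, 2))])
y = np.array([0] * 50 + [1] * 50)

def sigmoid(z):
    return 1 / (1 + np.exp(-z))

def train(X, y, epochs=200, lr=0.1, adversarial=False, eps=0.3):
    """Logistic regression via gradient descent; optionally augments each
    epoch with adversarially perturbed copies (adversarial training)."""
    w, b = np.zeros(2), 0.0
    for _ in range(epochs):
        Xb, yb = X, y
        if adversarial:
            # Per-sample loss gradient w.r.t. the input is (p - y) * w;
            # perturb each sample in the direction that raises its loss.
            grad_x = np.outer(sigmoid(X @ w + b) - y, w)
            Xb = np.vstack([X, X + eps * np.sign(grad_x)])
            yb = np.concatenate([y, y])
        p = sigmoid(Xb @ w + b)
        w -= lr * Xb.T @ (p - yb) / len(yb)
        b -= lr * np.mean(p - yb)
    return w, b

w, b = train(X, y, adversarial=True)
acc = np.mean((sigmoid(X @ w + b) > 0.5) == y)
print(f"accuracy on clean data: {acc:.2f}")
```

The design choice is the trade-off named in the bullet above: the model sees perturbed variants of its own inputs during training, so at deployment the same small manipulations are far less likely to flip its decisions.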

 

For organisations that are just embarking on the AI journey, the above actions will need to be woven in when operationalising AI, to counter the adversaries upfront. From selecting the right use cases to collaborating with external stakeholders, from deploying orchestration and automation to managing response, and from training cyber analysts to be AI-ready to setting the policies and governance, organisations will need to be robust across the board to empower cyber resilience.

Ultimately, before jumping on the AI bandwagon, business and security leaders need to ask the fundamental question of whether AI is the most fitting approach, and map out a clear, long-term plan to pave the way for strategic collaborations and investments.

Organisations will truly benefit from AI in cybersecurity only when secure, hardened AI models are put in place.