The article explains the vulnerabilities of AI systems and recommends practices for mitigating the cyber security challenges posed by the integration of AI into national cyber defence strategies.
According to the UK National Cyber Security Centre, cyber security concerns the protection of personal information stored on devices and in service applications, both at work and online.
The article notes that the market for AI in cyber security is estimated to grow from US$1 billion in 2016 to US$34.8 billion by 2025.
AI technology is increasingly being deployed to carry out cyber security tasks, and is involved in the cyber defence strategies of governments worldwide, including the UK, the US, Australia, China, Japan and Singapore. The article states that this will “improve the security of critical national infrastructures, such as transport, hospitals, energy and water supply.”
Various governing bodies are in the process of publishing standards and certification procedures to measure the ability of AI systems to withstand erroneous data.
McCutcheon and his co-authors point out that it is the learning ability of AI applications that could be targeted by hackers. By adding erroneous data to the datasets used to train AI systems, or by manipulating how an AI model categorises data, hackers can gain control of a system and change how it behaves.
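As an illustration (not taken from the article), the following minimal sketch shows how one form of "erroneous data", a label-flipping poisoning attack on the training set, can change how a simple classifier behaves. The synthetic dataset, the choice of model and the 30% poisoning rate are all assumptions made purely for demonstration.

```python
# Hypothetical sketch: label-flipping data poisoning against a simple classifier,
# illustrating how corrupted training data can change a model's behaviour.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split

# Synthetic stand-in for "benign vs. malicious" security data.
X, y = make_classification(n_samples=2000, n_features=20, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

def train_and_score(labels):
    """Train on the given training labels and report accuracy on clean test data."""
    model = LogisticRegression(max_iter=1000).fit(X_train, labels)
    return accuracy_score(y_test, model.predict(X_test))

# Clean baseline.
print("clean accuracy:", train_and_score(y_train))

# Attacker flips the labels of 30% of the training examples ("erroneous data").
rng = np.random.default_rng(0)
poisoned = y_train.copy()
idx = rng.choice(len(poisoned), size=int(0.3 * len(poisoned)), replace=False)
poisoned[idx] = 1 - poisoned[idx]
print("poisoned accuracy:", train_and_score(poisoned))
```

The poisoned model typically scores noticeably worse on the same clean test data, even though nothing about the model code itself was touched, which is why such attacks are hard to spot from the software alone.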
Given that attacks on artificial intelligence are difficult to detect, the authors suggest that standards and certification procedures should concentrate on making AI systems more reliable. To this end, they recommend that AI system providers develop and train AI models in-house, and monitor them continuously, around the clock, to capture any deviation from expected performance.
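The article does not specify how such monitoring would be implemented; as a rough sketch, a monitor could compare a model's rolling accuracy on labelled traffic against the figure expected from in-house validation and alert when the gap grows too large. The threshold values and function names below are assumptions for illustration only.

```python
# Hypothetical sketch (not the authors' system): a minimal continuous monitor
# that alerts when observed accuracy drifts from the expected baseline.
from collections import deque

EXPECTED_ACCURACY = 0.95   # assumed figure from in-house validation
TOLERANCE = 0.05           # assumed acceptable drift before alerting
WINDOW = 500               # number of recent predictions to track

recent = deque(maxlen=WINDOW)

def record_prediction(predicted_label, true_label):
    """Record one prediction outcome and check for unexpected performance drift."""
    recent.append(predicted_label == true_label)
    if len(recent) == WINDOW:
        observed = sum(recent) / WINDOW
        if EXPECTED_ACCURACY - observed > TOLERANCE:
            alert(observed)

def alert(observed_accuracy):
    # In practice this would notify an on-call analyst or open an incident.
    print(f"ALERT: accuracy {observed_accuracy:.2%} is below the expected "
          f"{EXPECTED_ACCURACY:.2%} - possible data poisoning or model drift")
```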
The authors acknowledge that the cost of satisfying these preconditions would be difficult for smaller commercial companies to bear in full. However, as the article concludes, if the security of national critical infrastructures is to be protected in part by artificial intelligence, the standards that govern it should account for the ways in which the software can be threatened.