Newcastle visiting professor warns about trusting AI in cyber security

Gemisha Cheemungtoo discusses a visiting professor's warnings about the risks associated with using Artificial Intelligence in cyber security

Gemisha Cheemungtoo
2nd December 2019
Image: A Lattice Modular Heli-Drone is displayed during a test run of the Lattice Platform Security System at the Red Beach training area, Marine Corps Base Camp Pendleton, California, Nov. 8, 2018. The drone was being tested to demonstrate its capabilities and potential for increasing security. (U.S. Marine Corps photo by Cpl. Dylan Chagnon)
In an article for the journal Nature Machine Intelligence, Tom McCutcheon, a visiting professor in Computing at Newcastle University, and his co-authors warn about the dangers of using artificial intelligence (AI) in cyber security defence.

The article explains the vulnerabilities of AI systems

The article explains the vulnerabilities of AI systems, and recommends practices for mitigating the cyber security challenges raised by the integration of AI into national cyber defence strategies.

According to the UK National Cyber Security Centre, cyber security concerns the safety of the personal information stored on our devices and in the services we use, both online and at work.

The article explains that the market for AI in cyber security is estimated to grow from US$1 billion in 2016 to US$34.8 billion by 2025.

AI technology is increasingly being deployed to carry out cyber security tasks

AI technology is increasingly being deployed to carry out cyber security tasks, and is involved in the cyber defence strategies of governments worldwide, including the UK, the US, Australia, China, Japan and Singapore. The article states that this will “improve the security of critical national infrastructures, such as transport, hospitals, energy and water supply.”

Various governing bodies are in the process of publishing standards and certification procedures to measure how well AI systems can withstand the processing of erroneous data.

McCutcheon and his co-authors point out that it is the learning ability of AI applications that could be targeted by hackers. By adding erroneous data to the datasets used to train AI systems, or by manipulating how an AI model categorises data, attackers can gain control of a system and change how it behaves.
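To give a sense of the kind of training-data attack described above, the sketch below (not taken from the article) poisons a synthetic dataset by relabelling some "malicious" examples as "benign" and compares the resulting model against one trained on clean data. The dataset, the logistic regression model and the 40% poisoning rate are all illustrative assumptions.

```python
# A minimal, self-contained sketch (not from the article) of a data-poisoning
# attack: an attacker corrupts the training data so that the model learned
# from it behaves differently on the attacker's chosen inputs.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

rng = np.random.RandomState(0)

# Synthetic stand-in for a security dataset: label 1 = "malicious", 0 = "benign".
X, y = make_classification(n_samples=2000, n_features=20, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.5, random_state=0)

def test_accuracy(labels):
    """Train on (X_train, labels) and report accuracy on the clean test set."""
    model = LogisticRegression(max_iter=1000).fit(X_train, labels)
    return model.score(X_test, y_test)

print("trained on clean data:    %.3f" % test_accuracy(y_train))

# Poisoning step: relabel 40% of the "malicious" training examples as "benign",
# nudging the trained model towards letting similar inputs through.
y_poisoned = y_train.copy()
malicious = np.flatnonzero(y_train == 1)
flipped = rng.choice(malicious, size=int(0.4 * len(malicious)), replace=False)
y_poisoned[flipped] = 0

print("trained on poisoned data: %.3f" % test_accuracy(y_poisoned))
```

Because the corrupted labels look like ordinary training data, this kind of tampering can be hard to spot after the fact, which is why the authors focus on detection being difficult.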

Given that attacks on artificial intelligence are difficult to detect, the authors of the article suggest that standards and certification procedures should concentrate on making AI systems more reliable. To that end, they recommend that AI system providers develop and train AI models in-house, and monitor them continuously, 24/7, to capture any deviation from expected performance.
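As a rough illustration of what such continuous monitoring could look like in practice, the sketch below (my own, not the authors') tracks a deployed model's accuracy over a sliding window of verified predictions and flags when it drifts too far from the performance measured at deployment time. The window size, alert threshold and variable names are illustrative assumptions.

```python
# A minimal sketch of performance-drift monitoring for a deployed model:
# keep a rolling record of whether recent predictions were correct, and
# raise an alert if accuracy falls well below the pre-deployment baseline.
from collections import deque

class PerformanceMonitor:
    def __init__(self, baseline_accuracy, window=500, max_drop=0.05):
        self.baseline = baseline_accuracy   # accuracy measured before deployment
        self.window = deque(maxlen=window)  # rolling record of correct/incorrect
        self.max_drop = max_drop            # tolerated drop before alerting

    def record(self, prediction, actual):
        """Log one verified prediction; return True if an alert should fire."""
        self.window.append(prediction == actual)
        if len(self.window) < self.window.maxlen:
            return False  # not enough recent data to judge yet
        current = sum(self.window) / len(self.window)
        return (self.baseline - current) > self.max_drop

# Hypothetical usage: feed each prediction in as its true label becomes known.
monitor = PerformanceMonitor(baseline_accuracy=0.95)
# if monitor.record(model_output, ground_truth): escalate to a human analyst
```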

The authors acknowledge that the cost of satisfying these preconditions would be difficult for smaller commercial companies to bear in full. However, as the article concludes, if the security of national critical infrastructure is to be protected in part by artificial intelligence, the standards that govern it should account for the ways in which that software can be threatened.
