By S4AllCities partner ATOS
With the ever-increasing technological modernization of societies around the globe come new threats. All kinds of sensitive and valuable data are stored on laptops, tablets and mobile phones, whether on physical drives or in the cloud: personal data such as pictures, texts or videos; financial data such as bank accounts and credentials; or company data such as wages, blueprints, plans and designs. For that reason, the preservation of privacy is of utmost importance. The security of our data must be among our top priorities, because a threat to this privacy is a threat to our freedom and, ultimately, to our society.
That is why governments and companies must invest resources in cybersecurity, and all the different actors must contribute to it. A recent article published by New Europe featured an interview with the Vice-President of the European Parliament and software engineer Marcel Kolaja (Geropoulos), which discussed the update to the Network and Information Security Directive. In his words, cybersecurity can be described as a race between the engineers of malware and cybersecurity professionals: an ever-evolving chase in which attackers try to exploit a system's vulnerabilities while software engineers try to find ways to prevent, stop and mitigate the attacks. This European directive aims to impose a list of minimum security requirements to be met with a risk-management approach.
The S4AllCities project, one of the initiatives financed by the European Commission in pursuit of the security of our society, features two cybersecurity use cases where state-of-the-art research is applied:
A. Early cyber-incident detection by means of Artificial Intelligence. A team of partners led by Bournemouth University is developing an anomaly detection module that uses a combination of Machine Learning algorithms to monitor network traffic data in search of unusual behavior. More information can be found in a previous post (Bournemouth University).
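To illustrate the general idea behind network anomaly detection, here is a minimal sketch, not the project's actual module: a statistical baseline is fitted on synthetic "normal" traffic features (packet rate and mean packet size are hypothetical choices), and samples whose z-score exceeds a threshold are flagged as unusual. Real modules would use richer features and learned models, but the monitoring principle is the same.

```python
import numpy as np

rng = np.random.default_rng(42)

# Synthetic "normal" traffic window: [packets per second, mean packet size in bytes]
normal = rng.normal(loc=[100.0, 500.0], scale=[10.0, 50.0], size=(1000, 2))

# Fit a simple statistical baseline on the normal window
mu = normal.mean(axis=0)
sigma = normal.std(axis=0)

def anomaly_score(x):
    """Maximum absolute z-score across the traffic features."""
    return np.max(np.abs((x - mu) / sigma))

def is_anomalous(x, threshold=4.0):
    """Flag a sample whose score deviates far from the learned baseline."""
    return bool(anomaly_score(x) > threshold)

# A typical sample versus a burst resembling a flood attack
typical = np.array([105.0, 480.0])
burst = np.array([900.0, 60.0])  # huge packet rate, tiny packets

print(is_anomalous(typical))  # False: close to the baseline
print(is_anomalous(burst))    # True: flagged as unusual behavior
```

The threshold trades off missed incidents against false alarms; in practice it would be tuned on validation traffic rather than fixed at 4.0.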
B. Adversarial attack countermeasures. With the increasing spread of Machine Learning algorithms for detection, recognition and validation tasks in security applications, a new type of attack has appeared. The so-called adversarial attack is a physical attack that aims to confuse Machine Learning models by altering the input in a way that causes the model's internal operations to malfunction. In computer vision applications, an adversarial attack on a person-detection network could consist of a printed pattern that, when shown to the camera, prevents the model from detecting any person. Such sophisticated physical attacks are expected to become widespread in the near future as the use of Machine Learning models continues to grow. Within this project, the partner Atos is researching countermeasures against these kinds of attacks by creating models resistant to adversarial deception.
Figure 1. Universal adversarial perturbation for GoogLeNet (Moosavi-Dezfooli et al.). This pattern has a high probability of fooling GoogLeNet's classification of natural images.
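The mechanism behind adversarial examples can be sketched with a toy model. The following is an illustrative implementation of the Fast Gradient Sign Method (FGSM, one standard attack from the literature, not the specific technique studied in the project), applied to a hypothetical two-feature linear classifier: the input is nudged along the sign of the loss gradient, and a tiny perturbation flips the model's decision.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# A toy linear classifier standing in for a detector (hypothetical weights)
w = np.array([1.0, -1.0])
b = 0.0

def predict(x):
    """Return 1 if the model 'detects' the target class, else 0."""
    return int(sigmoid(w @ x + b) >= 0.5)

def fgsm_perturb(x, y_true, eps):
    """Fast Gradient Sign Method: step the input along the sign of the
    loss gradient to push the model toward a wrong prediction."""
    p = sigmoid(w @ x + b)
    grad_x = (p - y_true) * w          # d(cross-entropy)/dx for this linear model
    return x + eps * np.sign(grad_x)

x = np.array([0.5, 0.3])               # clean input, correctly classified as 1
x_adv = fgsm_perturb(x, y_true=1, eps=0.2)

print(predict(x))      # 1: detected
print(predict(x_adv))  # 0: the small perturbation evades detection
```

One common defense, in the spirit of building deception-resistant models, is adversarial training: perturbed inputs like `x_adv` are fed back into training with their correct labels so the model learns to resist them.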
References - Bournemouth University. Advanced Cyber and Physical Situation Awareness in Urban Smart Spaces. 18 May 2021. Security for All Cities. August 2021. <https://www.s4allcities.eu/post/advanced-cyber-and-physical-situation-awareness-in-urban-smart-spaces>.
Geropoulos, Kostis. EU to counter cyber threats. 23 July 2021. New Europe. August 2021. <https://www.neweurope.eu/article/eu-to-counter-cyber-threats/>.
Moosavi-Dezfooli, Seyed-Mohsen, et al. “Universal adversarial perturbations.” Proceedings of the IEEE conference on computer vision and pattern recognition. 2017. 1765-1773.