Adversarial artificial intelligence.
Many technologies fade into oblivion after a time, but this is not the case for artificial intelligence, which keeps growing. Although AI offers a wide range of capabilities, it also carries vulnerabilities that must be addressed. A telling example of AI's weaknesses is a study showing that innocuous-looking stickers can confuse the recognition systems of autonomous cars.
The study of these problems has given rise to terms such as "adversarial attacks", which refer to techniques that manipulate machine learning systems by feeding crafted data into an algorithm. In effect, an attacker reverse-engineers a machine vision algorithm so that it fails to recognise something, or sees whatever the attacker intends. Researchers at LabSix demonstrated this against an image classifier: using a 3D printer they created toy turtles that, to the human eye, were indistinguishable from real ones, yet the recognition system classified them as rifles. This was possible because they manipulated the system's algorithm by introducing a pattern into the design of each turtle's shell so that it was detected as a rifle.
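To make the idea concrete, here is a minimal sketch of one well-known adversarial-attack technique, the Fast Gradient Sign Method (FGSM), which nudges an image just enough to change a classifier's decision. It does not reproduce the LabSix turtle experiment; the pretrained ResNet-18 model, the random input tensor, the label index and the epsilon value are illustrative assumptions only.

```python
import torch
import torch.nn.functional as F
from torchvision import models

# Any image classifier would do; a pretrained ResNet-18 is only an assumption.
model = models.resnet18(weights="IMAGENET1K_V1")
model.eval()

def fgsm_attack(model, image, true_label, epsilon=0.03):
    """Perturb `image` so that the classifier's loss on `true_label` increases."""
    image = image.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(image), true_label)
    loss.backward()
    # Step in the direction that increases the loss, bounded by epsilon.
    adversarial = image + epsilon * image.grad.sign()
    return adversarial.clamp(0, 1).detach()

# Hypothetical usage: one 224x224 RGB image tensor and an ImageNet label index.
x = torch.rand(1, 3, 224, 224)
y = torch.tensor([207])
x_adv = fgsm_attack(model, x, y)
print(model(x_adv).argmax(dim=1))  # often no longer the original prediction
```

The perturbation is bounded by epsilon, which is why the altered image can remain indistinguishable to a human while still flipping the model's output.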
The most accurate AI-based recognition systems are also the most vulnerable to adversarial attacks. According to Pin-Yu Chen, their very accuracy makes them easier to fool, so it is important to strike a balance between accuracy and "robustness" as protection against possible attacks.
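One common way to pursue that balance is adversarial training: the model is repeatedly shown attacked versions of its own training data so that it learns to resist them, usually at some cost in plain accuracy. Below is a minimal sketch of a single adversarial training step, reusing the hypothetical fgsm_attack helper from the previous example; the model, optimizer and epsilon value are again assumptions for illustration, not any specific published method.

```python
import torch.nn.functional as F

def adversarial_training_step(model, optimizer, images, labels, epsilon=0.03):
    """One optimisation step on adversarially perturbed copies of a batch."""
    # Craft attacks against the current model, then train the model on them.
    adv_images = fgsm_attack(model, images, labels, epsilon)
    optimizer.zero_grad()
    loss = F.cross_entropy(model(adv_images), labels)
    loss.backward()
    optimizer.step()
    return loss.item()
```

Because the attacks are regenerated against the current model at every step, the network is continually forced to rely on features that survive small perturbations.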
Andrew Ilyas' team confirmed this by training an AI system to identify cats using both characteristics observable by humans ("consistent") and characteristics imperceptible to our eyes ("non-consistent"). They found that the more the system relied on the non-consistent features, the more accurate it became, but also the more vulnerable it was to being pushed into an error by an adversarial attack.
Studies carried out by the Air Institute identify these AI problems and seek to answer them with both theoretical and practical solutions.