With the rapid popularization of AI technologies such as autonomous driving, smart assistants, face recognition, smart factories, and smart cities, and the accompanying rise in related security incidents, consumers and industry are paying ever closer attention to AI cybersecurity issues and threats.
Recently, at the request of the European Union and the US government, Adversa, a trusted-AI research and consulting company, published the industry's first comprehensive research report on the security and trustworthiness of artificial intelligence. The report also draws on Gartner's relevant predictions and recent research on adversarial attacks against AI.
Oliver Rochford, a former Gartner analyst now on Adversa's advisory board, pointed out that building trust in the security of machine learning is essential: we are asking people to trust the black box of AI, which is difficult, and for the AI revolution to succeed, that trust must be built. AI faces too many security risks, but at the same time the potential benefits are huge.
Eugene Neelou, CTO of Adversa, said: "To raise security awareness in the trusted-AI field, we launched a project more than a year ago to analyze a decade of developments across academia, industry, and government. The results are staggering: AI systems commonly suffer from security and bias issues and lack appropriate defenses, yet interest in AI security is growing exponentially. Enterprises should closely track the latest AI threats, implement AI security awareness programs, and protect their AI development life cycle. The important thing is to start now."
The report shows that the security situation in the AI field is dire; that AI security research papers have exploded in the past two years; that the United States, China, and Europe are competing fiercely in trusted-AI research, with China accelerating to overtake; and that the AI field faces ten major security threats.
The following highlights of the report were compiled by Aqniu (安全牛):
Real-world AI security incidents are growing rapidly:
In the automotive, biometrics, robotics, and Internet industries, real-world AI security incidents are growing rapidly. Among early adopters of AI, the sectors receiving the most attention are the Internet (23%), cybersecurity (17%), biometrics (16%), and autonomous systems (13%).
In the past two years, governments, academia, and industry have published as many as 3,500 research papers on AI security, more than the sum of the preceding two decades. The fierce competition among the United States, China, and the European Union in trusted AI is expected to continue: the United States publishes 47% of research papers, but China is gaining momentum.
AI is not yet ready for hacker attacks:
The artificial intelligence industry is not yet fully prepared for real-world hacking attacks: each of the 60 most commonly used machine learning (ML) models has, on average, at least one security vulnerability.
The AI technology field most targeted by attackers is computer vision:
The most targeted AI field is computer vision (65%), followed by analytics, language processing, and autonomous systems.
Images, text, and records are the most vulnerable AI data sets:
The most frequently attacked AI applications are image classification and face recognition:
The Internet, cybersecurity, biometrics, and automotive industries are hit hardest by AI security issues:
AI faces ten major security threats:
- Evasion attacks (manipulating AI decisions and outputs through adversarial examples): 81%
- Poisoning attacks (injecting malicious data to degrade an AI system's reliability and accuracy): 6.8%
- Membership inference attacks (inferring whether a specific data sample was used to train the AI): 3.5%
- Backdoor attacks: 2.3%
- Model extraction attacks (exposing AI algorithm details through malicious queries): 1.9%
- Attribute inference attacks: 1.3%
- Trojan attacks: 1.2%
- Model inversion attacks (reconstructing input data from outputs obtained via malicious queries): 1.2%
- Watermark-evasion attacks (bypassing an AI system's detection of copyright and authenticity): 0.6%
- Reprogramming attacks (repurposing an AI model for illicit ends): 0.2%
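To make the dominant threat concrete, the sketch below shows how an evasion attack works in the simplest possible setting: a toy linear classifier (not any model from the report) is fooled by an FGSM-style perturbation, i.e. a small step against the sign of the model's gradient with respect to the input. All values here are illustrative assumptions, not data from the report.

```python
import numpy as np

# Toy linear "model": score = w . x + b; a positive score means class 1.
w = np.array([1.0, -2.0, 3.0])
b = 0.5

def predict(x):
    return int(w @ x + b > 0)

# A clean input that the model classifies as class 1.
x = np.array([0.5, 0.1, 0.2])       # score = 0.5 - 0.2 + 0.6 + 0.5 = 1.4
assert predict(x) == 1

# FGSM-style evasion: for a linear model the gradient of the score with
# respect to the input is just w, so stepping eps against sign(w) lowers
# the score as fast as possible under a max-perturbation budget eps.
eps = 0.5
x_adv = x - eps * np.sign(w)        # small, bounded change to the input

assert predict(x_adv) == 0          # the decision flips to class 0
assert np.max(np.abs(x_adv - x)) <= eps
```

Real evasion attacks apply the same idea to deep networks, where the gradient is obtained by backpropagation and the perturbation is kept small enough to be imperceptible to a human.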