Malware remains a major threat to network security, and effective malware detectors of various types are in constant demand. Deep learning-based classifiers have substantially improved the ability to identify malware samples. However, these detectors are vulnerable to adversarial examples: samples crafted by adding small, carefully selected perturbations to the original software. Any vulnerability in a malware detector poses a significant threat to the platforms it defends. Yet existing attack methods may not satisfy the inherent constraints of malware, such as keeping the file a valid executable and preserving its malicious functionality. This paper proposes a new method for generating adversarial malware samples. The original malware is mutated into new samples through semantic analysis of the malware, transplantation of code within the program, and the addition of code at the end of the file; the mutated samples are then used to fool the detector. Experiments show that, compared with existing methods, our method significantly improves both the efficiency of generating adversarial examples and the attack success rate.
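As an illustration of the "addition of code at the end of the file" perturbation mentioned above, the following is a minimal sketch, not the authors' implementation, assuming Windows PE malware and written in Python. Bytes appended past the end of a PE image become overlay data that the loader ignores, so the sample keeps its original behavior while its byte-level features change; `score_sample` is a hypothetical stand-in for querying the target deep learning detector.

```python
# Minimal sketch of an append-at-end-of-file mutation (assumed PE overlay append).
from pathlib import Path


def append_payload(sample_path: str, payload: bytes, out_path: str) -> str:
    """Write a mutated copy of `sample_path` with `payload` appended as overlay data."""
    data = Path(sample_path).read_bytes()
    Path(out_path).write_bytes(data + payload)
    return out_path


def score_sample(path: str) -> float:
    """Hypothetical detector query: probability that the file at `path` is malicious."""
    raise NotImplementedError("replace with a call to the target malware classifier")


if __name__ == "__main__":
    # Example usage: append 4 KiB of filler bytes and re-query the detector.
    mutated = append_payload("original.exe", b"\x90" * 4096, "mutated.exe")
    print("malicious score:", score_sample(mutated))
```

In practice, an attack of this kind would iterate: choose the appended content (for example, bytes taken from benign programs), query the detector, and keep the mutation only if the malicious score drops.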