Results of protection against threats from the list of CERT Poland

23 December 2021

Since the November edition of our long-term Advanced In The Wild Test, we have made some significant changes. As observers of the virtual world, we know that cyberthreats are becoming more complex year by year. They require a new approach to protection, so what was effective 5 years ago does not deliver the expected results today. As the AVLab Cybersecurity Foundation, we also have to evolve in order to provide better feedback and recommendations for good security products, making the Internet safe and free from threats.

We used additional sources of malicious software to conduct our protection test in November 2021. In addition to our honeypots, we now cooperate with, among others, CERT Poland to obtain virus samples for testing (see details in our methodology).

We have added Yara rules to our testing system to automatically reject invalid malware samples and to classify the remaining ones more quickly, which also reduces false positives. In future editions, we will enhance both our tools and our testing methods. Each subsequent edition teaches us how to work with developers and shows us new things we can implement in our methodology.
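To illustrate the idea of automated sample triage described above, here is a minimal Python sketch. It is a hypothetical illustration only: the magic-byte checks, size threshold, and category names are our assumptions for the example, not AVLab's actual Yara rules.

```python
import hashlib

# Illustrative magic-byte prefixes for rough sample classification.
# These mappings are example assumptions, not the real rule set.
MAGIC_PREFIXES = {
    b"MZ": "pe_executable",        # Windows PE binaries
    b"%PDF": "pdf_document",       # PDF lures
    b"PK\x03\x04": "zip_archive",  # ZIP, incl. Office macro documents
}

def triage(sample: bytes, seen_hashes: set) -> str:
    """Reject duplicate or truncated samples; classify the rest."""
    digest = hashlib.sha256(sample).hexdigest()
    if digest in seen_hashes:
        return "reject: duplicate"
    seen_hashes.add(digest)
    if len(sample) < 64:
        return "reject: truncated"
    for magic, family in MAGIC_PREFIXES.items():
        if sample.startswith(magic):
            return f"accept: {family}"
    return "reject: unknown format"

seen = set()
print(triage(b"MZ" + b"\x00" * 200, seen))  # accept: pe_executable
print(triage(b"MZ" + b"\x00" * 200, seen))  # reject: duplicate
```

In a real pipeline, the accepted categories would feed the classification step, while rejected samples would never reach the tested products.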

Moving on to the substance – from November 1 to 30, 24 hours a day, our system tested the following solutions:

  • Avast Free Antivirus
  • Avira Antivirus Pro
  • Comodo Advanced Endpoint Protection
  • Comodo Internet Security
  • Emsisoft Business Security
  • F-Secure Total (new!)
  • Malwarebytes Anti-Malware Pro (new!)
  • SecureAPlus Pro
  • Webroot Antivirus

Levels of blocking malicious software samples

We share malware checksums with researchers and security enthusiasts, grouped by the protection technology that contributed to detecting and stopping each threat. According to independent experts, this innovative way of comparing security will help users better understand the differences between products available on the market.

Each malware sample blocked by a tested protection solution is assigned to one of the following levels:

  • Level 1 (L1): The browser level, i.e., the virus was stopped before or just after it was downloaded to the hard drive.
  • Level 2 (L2): The system level, i.e., the virus was downloaded but not allowed to run.
  • Level 3 (L3): The analysis level, i.e., the virus was run and then blocked by the tested product.
  • Failure (F): The failure, i.e., the virus was not blocked and infected the system.
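The four levels above can be sketched as a simple tally over per-sample verdicts. This is a minimal Python illustration; the verdict list is made-up example data, not results from the actual test.

```python
from collections import Counter
from enum import Enum

# The four outcome levels defined in the test methodology.
class Level(Enum):
    L1 = "browser"   # blocked before/after download
    L2 = "system"    # downloaded but not allowed to run
    L3 = "analysis"  # run, then blocked by the product
    F = "failure"    # not blocked; system infected

# Hypothetical verdicts for five samples against one product.
verdicts = [Level.L1, Level.L1, Level.L2, Level.L3, Level.F]

summary = Counter(v.name for v in verdicts)
blocked = sum(n for name, n in summary.items() if name != "F")
print(summary)
print(f"protection: {blocked / len(verdicts):.0%}")  # protection: 80%
```

The final protection score simply counts every sample blocked at any level (L1–L3) against the total, which matches how the results table aggregates outcomes.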

Malicious software

In November, we used 1773 malware samples for the test, consisting of, among others, banking trojans, ransomware, backdoors, downloaders, and macro viruses. In contrast to well-known testing institutions, our tests are much more transparent – we publicly share the full list of malicious software samples.

The latest results of blocking each sample are available in a table:

Summary of the November 2021 edition of the test

We have been testing on Windows 10. Our observations show that Windows 10 is currently more stable than Windows 11, so it was a good idea not to migrate immediately. In addition, while the protection results of Microsoft Defender on Windows 10 versus Windows 11 do not differ, we have observed that not all features of certain security software work as they should when running different types of tests. We do not know whether this is because Windows 11 is still immature, or because the protection products have not yet been sufficiently tested on it.

In our test, we noted malware samples that caused trouble for the tested protection solutions. With the public good of online security in mind, the relevant technical data was reported to and discussed with the developers, resulting in the correction of the detected errors.

Also, please note that the correct interpretation of the test is not obvious at first glance. Yes, all applications achieved almost 100% protection against so-called in-the-wild samples, which can be found on the Internet daily. These samples are often similar to one another and are detected by generic signatures. During the test, however, there are also threats that a given producer does not yet recognize.

This is best shown by so-called Level 3, because at this stage the virus is already running, and we expect the tested application to react correctly. In other words, if a sample was not blocked by scanning in the browser or when it was moved from location A to location B on the system disk, then once it has been run we can conclude that the file was not known to the producer. For this reason, the malware could not be detected without a prior, deep scan, so signature-based protection alone was insufficient. Not every antimalware product can analyze a file before it is saved to disk, so the file sometimes has to be run locally or in the cloud for automated analysis.

Moving on to the substance of this summary, it is worth noting Level 3, which shows real protection against 0-day threats. But be careful! Some antimalware applications do not have browser or even signature protection, which could result in a lower score if the test is not interpreted correctly.

In our tests, we do not award negative or positive points (e.g., for blocking a threat early). An antimalware solution should simply block a threat in any manner intended by its producer. The final result reflects whether or not it was possible to do this correctly using any of the protection modules.