Okay, so in response to this I’m just wondering one thing (at least at the moment).
What do you think is the best way to measure an anti-malware product? Be as specific as possible.
Also, bear in mind that I want the results to be usable by both a novice user and an advanced user, so the results should be interpretable by both groups.
Please let me know what you come up with below.
I use 3 systems (either physical or virtual) to test:
- system A has no security software installed (system A is used to act as both the control and as the locator)
- system B has the software to be tested installed (system B is used to test prevention)
- system C also has the same security software as system B installed (system C is used to test detection if an infection occurs on system B)
1. System A is used to locate malware. As each instance of malware is located, it is allowed to act and the URL of the malware is recorded. System A is then reset to its original state.
2. The URL of the malware is entered into the address bar of the browser on system B to test whether the security software can prevent the infection occurring. System B is then reset to its original state.
3. If system B could not prevent the infection occurring AND the malware can be saved or extracted as a discrete file system object, it is copied to system C, where it is manually scanned to test detection. System C is reset to its original state on completion.
Steps 1, 2 and 3 are performed in order for each malware object to be tested. This does require a reset of every test system at the end of each stage, but it ensures that each system is “fresh” at the beginning of each test cycle and eliminates the cumulative instability that can arise from running many malware samples sequentially on the same system in the same session.
P.S. This is an overview only of the major test phases we employ.
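As a rough sketch, the per-sample cycle above could be orchestrated in code. Everything here is hypothetical: `prevention_check` and `detection_scan` are stand-ins for whatever actually drives systems B and C (in the mock run below they are simple lambdas), and the system resets between stages are not modeled, only implied by the fact that no state is shared between calls.

```python
from dataclasses import dataclass
from typing import Callable, Optional

@dataclass
class TestResult:
    url: str                  # recorded on system A in step 1
    prevented: bool           # outcome of step 2 on system B
    detected: Optional[bool]  # outcome of step 3 on system C; None if step 3 did not apply

def run_test_cycle(url: str,
                   prevention_check: Callable[[str], bool],
                   detection_scan: Callable[[str], bool],
                   extractable: bool) -> TestResult:
    # Step 2 (system B): browse to the recorded URL and see whether
    # the product blocks the infection.
    prevented = prevention_check(url)

    # Step 3 (system C): only reached when prevention failed AND the
    # malware can be saved as a discrete file system object.
    detected = None
    if not prevented and extractable:
        detected = detection_scan(url)

    return TestResult(url, prevented, detected)

# Mock run: one sample that slips past prevention but is caught on scan.
result = run_test_cycle("http://example.test/sample",
                        prevention_check=lambda u: False,
                        detection_scan=lambda u: True,
                        extractable=True)
print(result.prevented, result.detected)  # False True
```

The `Optional[bool]` on `detected` mirrors the conditional in step 3: a sample that was prevented (or that cannot be extracted as a file) never reaches system C, so "not tested" stays distinct from "not detected".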
It is not possible to test an anti-malware product in a way that gives a useful comparison with other products, because it would take far too long. How could you, within a realistic period of time, check protection against millions of threats during real-world usage (which is how real-world infections occur) and then duplicate those tests with every other product? It is rather like asking for the best way to test hardware reliability: it cannot be done, because even the most intensive tests only ever check a tiny fraction of all possible situations.
The closest you can get to very large scale real-world testing of multiple products is to use results from surveys.