AV-Comparatives Retrospective Results For May 2008

Here you go. Avira still in the top 2 of course.

http://www.av-comparatives.org/

I wonder why they haven’t tested the updated versions (e.g., Avira AV 8.1)?

Josh

When I last contacted Andreas, he mentioned that when the tests were started, the new versions of those anti-virus products had not been released.

Also, I didn’t “understand” this proactive test (the 2nd part of the test) that was performed by av-comparatives. The first one was an on-demand scan for malware. The 2nd was supposed to be a proactive test, yet it was an on-demand test as well, this time with totally different results.

I’m a security consultant, and an on-demand test is one thing and a proactive test is a completely different thing. Besides, Andreas himself mentions that “this test is performed on demand - it is NOT an on-execution/behavorial test[…]”

The 1st one was an on-demand test, and so was the 2nd. How the hell does that make it a proactive test?

Also, the number of malware samples is quite different from one test to the other. The 1st one was performed with 1,683,364 malware threats and the 2nd with 11,509. I do realize Andreas mentions “NEW” threats for the 2nd test, but I believe the test should be performed with the same malware threats. If they wanted to perform a new test using new malware threats, that would be a different situation. But using different malware samples from one test to the other, well, I just don’t consider it “honest”.

Things are what they are, and that is just my opinion.

Very simple: he/they perform tests with months-old AV programs on new virus samples…

Products at most 1 week old (from 4th February) were tested against new malware (which appeared between 5th and ~12th February), in order to measure the proactive detection rate.
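For anyone still confused about the methodology, here is a minimal sketch of the idea in Python. The sample data, dates, and names are made up for illustration; it just expresses the logic IBK describes (freeze the product and its signatures at a cut-off date, then scan only samples that first appeared after that date):

```python
from datetime import date

# Hypothetical cut-off dates, matching the ones IBK mentions.
SIGNATURE_FREEZE = date(2008, 2, 4)   # product/signatures as of 4th February
TEST_WINDOW_END = date(2008, 2, 12)   # samples up to ~12th February

# Hypothetical sample set: (name, date first seen, detected by frozen product).
# In the real test the "detected" flag would come from actually scanning
# with the frozen product; here it is just example data.
samples = [
    ("sample_a", date(2008, 2, 5), True),
    ("sample_b", date(2008, 2, 8), False),
    ("sample_c", date(2008, 2, 11), True),
]

# Only malware that appeared AFTER the signature freeze counts: the product
# cannot have a specific signature for it yet, so any detection must come
# from heuristics/generics, i.e. "proactive" detection.
new_samples = [s for s in samples
               if SIGNATURE_FREEZE < s[1] <= TEST_WINDOW_END]

detected = sum(1 for s in new_samples if s[2])
rate = detected / len(new_samples)
print(f"Proactive detection rate: {rate:.0%} ({detected}/{len(new_samples)})")
```

Note that the scan itself really is on-demand, as was objected earlier in the thread; what makes the result “proactive” is that the signatures predate every sample being scanned.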

Thanks for the correction, IBK. Did you try to test CFP’s D+ capability on some malware samples?

For this many samples, he would need an army of testers, because to test something like Defense+, you have to execute the malware.

Did you see the word “some” in my previous question?

Sorry, I didn’t. While it may prove useful, I agree that, given the sample size, it won’t be that significant.

It can be very significant for an expert like him; you can always use representative, interesting, or rare samples to point out weaknesses…

Well, you know what “they” say: If you can’t do it, then don’t. (:KWL) (not talking about Defense+ (:LOV) )

I am referring to the two tests that were made: both were on-demand, not one on-demand and one proactive.

And tests such as the ones from av-comparatives may give a hint about which product to get, but they can also be quite deceiving. Why? Well, av-comparatives picked 11.XXX malware samples for the proactive(?) test, and they got a result. What if the malware samples had been 11.XXX different ones? The results could have been different.

I would never pick or recommend an anti-virus based on those tests alone, and that’s why I perform my own tests. Also, I believe the home user has no need to pay for an anti-virus. There are free alternatives that, combined together, make one hell of a defense. And of course, never forget that we always have to use our common sense.

Best regards

And how different a sample set could someone else gather in the 5th to 12th time frame?

While it’s true that you shouldn’t set too much store by a single set of results, Avira has appeared around the top of just about every test during the last couple of years.

And that makes me a smart boy. :slight_smile:

But really, didn’t Kaspersky use to be top-notch? And the “second contender”, ESET NOD-32, detected only 24 per cent of “Windows viruses”, etcetera? Strange results.