Forgive me if I’ve missed something, but this is essential to me. If AV-C can’t present its reasons for concluding that CAVS isn’t good enough, then AV-C’s own credibility should be questioned. If you’ve reached that conclusion based on tests, please make clear that you have actually performed such tests. If it’s just a guess that CAVS isn’t good enough, then I wonder what I would need AV-C for.
I’d like to ask for a few simple clarifications: Does ToS [Andreas, aka IBK] mean Terms of Service? Does the ToS in question prescribe an NDA requirement? If so, is the NDA binding on both parties? Thanks & sorry if it’s already stated somewhere.
A product doesn’t become good enough just because some people argue that it is. It either is or is not.
Nothing personal against you or anyone else, but the thought that there are people who doubt that CAV is ABSOLUTELY GODAWFUL right now is rather funny. It’s like asking for proof that fire is hot, or that the sky is blue.
This doesn’t change my point, solcroft. One shouldn’t say that CAV is “bad” (or, to be honest, “good” either) until it has been properly tested. I’m one of those who think CAV is good because of the little fragments of info here and there, plus, it’s nicely backed up by Defense+.
If CAV has been tested and the conclusion is “this isn’t good enough to present to the public”, say so (although, if that’s true, I think it’s a strange decision not to present the results). If not, please stop guessing.
I’m not sure how Andreas’ post was ambiguous on this matter. CAV does not reach the minimum criteria to be “presentable to the public”, as you put it. And to arrive at that conclusion, a test was performed.
AFAIK Andreas does not publish the test scores of products that don’t qualify. While I don’t personally think that’s a strange decision, sometimes I too wish he didn’t exercise this policy - I’m particularly interested to know how VIPRE scores.
If I’m buying a new car, I’m interested in the Euro NCAP crash test rating. If they haven’t rated the car, that means they haven’t tested it. If they have tested it, and it turns out to be very unsafe, they publish it. That’s how testing should be done, in my opinion.
If CAV doesn’t meet some minimum criteria to be used, I’m fine with that. But if it - according to some testing organization - doesn’t meet the minimum criteria to be presentable to the public, I’m not really fine with that. Of course, it’s the tester’s choice. I make a different one: trusting CAV as part of an otherwise strong solution (Firewall, Defense+).
I understand what you are saying, LA. You simply wish to see the results that led to the given conclusion. If Andreas chooses not to make them public (his reasons), maybe he would share them with you privately? To me that would be reasonable.
I agree with John. And besides, the AV-C people did say to email them with questions, since they don’t read this thread much. LA, take up a personal email conversation with him and ask whether you can quote any relevant data from the emails.
Sample sharing in the anti-malware community is based on trust. Comodo have no automatic right to receive samples from anyone, however good their intentions are: Andreas has a perfect right not to share his own samples, and doesn’t necessarily have the right to share samples received from elsewhere (read his methodology document).
The WildList Organization and AMTSO are not testing organizations. WildList is a source of a validated sample set used by a number of trusted testers and certifying organizations; AMTSO is a standards organization with the express purpose of raising testing standards across the board. If Melih doesn’t like the way tests are conducted in general right now, joining AMTSO could be worth the subscription fee to him, as long as he’s prepared to learn from the anti-malware community as well as making his voice heard. And anyone with an interest in testing issues can learn from the documentation they’re beginning to publish.
Whether CIS uses heuristics is not off-topic: it’s central to the question of whether its performance is comparable to commercial products. All modern mainstream scanners use, at the very least, basic passive heuristics, and some use much more proactive forms of analysis. It’s unrealistic to expect any scanner based purely on signature scanning to compete successfully in a competent comparative test, even one using purely static testing.