I was simply pointing out that detection rates are not based solely on a vendor's ability to gather malware, as you claimed. It may be the cause of low detection rates, it may be part of the cause, or it may not be a cause at all.
That wasn't the point I was trying to make. Despite all these statements about legitimate and non-discriminatory selection criteria, this only means that not all kinds of restrictions can simply be motivated by detection rate tests.
I would feel such considerations were more appropriate if such tests were only meant to score detection rates and not to establish whether an AV brand is eligible to receive samples (always granting that private property can be withheld for any arbitrary reason, or for no reason at all).
I think it can. The criteria were created to prevent abuse, not to discriminate against specific vendors. I imagine the restrictions were put in place to ensure that vendors who participate in the sharing do actually have competent virus labs, and that the testers themselves do not become virus collectors for vendors who have no ability to do so themselves. And once those very non-discriminatory criteria are established, there seem to be no further conditions barring a vendor from receiving samples from the tester - short of concerns about the vendor's ethics, of course.
I would imagine that no detection rate test is able to tell the difference between AV vendors whose malware collection infrastructure is influenced by malware-sharing partnerships and AV vendors whose collection is not.
Thus, while it could be useful for scoring how well an AV protects against known samples, IMHO it doesn't tell much about how competent a specific brand's virus lab is if it is not possible to exclude any bias introduced by existing cross-vendor partnerships.
It looks like you are sure that each vendor gathers samples independently, without any help from private sharing agreements, and that each direct contribution (e.g. samples submitted by its own user base) or indirect contribution (e.g. samples submitted by partners' user bases) collectively amounts to an irrelevant share (I wonder how large a share can still be considered irrelevant).
There is no need to cite any specific AV tester either, but I would like to know what you consider a non-discriminatory selection criterion for sample disclosure, along with a description of how it could be possible to really measure how competent a single virus lab is on its own, without relying on any form of partnership.
You can then leave it to other readers to verify whether any tester or vendor actually meets your suggested criteria.
Regarding your "nowhere-to-be-found" samples: if they were really nowhere to be found, they wouldn't have ended up in the tester's sample set in the first place.
I don't have hardcore statistics that are 100% verifiable, if that's what you're asking for. I do think, however, that this isn't a problem for a vendor unless it is less than 12 months old.
Such clean-cut logic doesn't address the fact that not all malware remains available to collect for an extended timeframe.
Apart from recurring threats, which could also be years old, I have yet to confirm how many samples in any AV testbed were available for at least a week.
I have also yet to understand whether an AV company that fails to gather a sample within a week (or any meaningful timeframe) doesn't qualify as trustworthy or simply has a bad virus lab.
AFAIK AV testbeds are not designed to test that, and yet I wonder whether anyone could possibly use them to score trustworthiness or virus lab competency.
Again, assuming that most malware is regionally targeted, I wonder whether cross-AV partnerships could prove useful in increasing the geographical coverage of malware gathering. And if the malware gathered through user-submitted samples is influenced by the cumulative market share of all partnering AV brands, I wonder how much of that can be credited to each single vendor's competent virus lab.
Sharing criteria exist only between testers and vendors, as defined by the testers themselves - let's not confuse and lump this together with sharing between the vendors themselves. Researchers share samples with colleagues from other companies if they are considered trustworthy; I believe I've discussed this with you at length before.
That's circular reasoning.
Besides, I guess you consider private-agreement malware sharing to be motivated by trustworthiness alone, whereas business logic also includes other restrictions, for example whether existing partnerships already fulfill an AV brand's needs.
A vendor fully detecting the samples submitted by its user base doesn't necessarily make its product's detection rate reflective of the overall malware population, but if it can fully protect its users there would be little need for it to bother with what samples are inside testing organizations' sample sets at all.
What about vendors involved in private-agreement partnerships? The more vendors are involved in sharing partnerships, the more likely the collective sample set is to reflect the overall malware population.
Comodo insists on picking up every unknown file from its users' systems, in addition to manually submitted samples. But even among user-submitted samples (at least from me) I see an average of 30% detection after several weeks. Hence my claim that a company can have some well-polished collection mechanisms and still fail at detection.
I would be highly interested to know whether there is any AV tester that actually measures how much time each AV brand needs to issue signatures for all the samples sent after a comparative test (in case they share samples at all, of course). Besides, I see you wish to focus on signature-creation speed while dismissing the malware-gathering aspect as irrelevant.
In the era of fast-flux domains? Not very long, but definitely long enough to infect users. And certainly long enough for that malware to end up inside the collections of testers and antivirus vendors alike. Come on, now. They don't vanish instantly. How do you think anyone got those samples at all? Time machines?
Yep, not very long. I cannot possibly know how long a malware site will last either, but again I wonder what happens once an AV vendor gets a sample, and how private-agreement partnerships affect the subsequent steps.
I wonder whether, as long as the sample exists and some AV vendor passes a test, it really doesn't matter what happened in between. Numbers tell the truth, I guess, and they sure do about detection rates.
If that is really all that matters.
Once again you make it sound as though there's an insider's clique of corporate bigwigs among the "big boys" who scheme and conspire to decide who gets samples. On the contrary, it's the tester who establishes a public baseline that even the so-called "big boys" need to toe in order to receive samples from the tester. There's absolutely nothing hush-hush and backstage about it.
Once again, IMHO the whole point is still whether sample sharing should be regulated by private agreements, whereas biological viruses are treated in a different way for obvious reasons.
I see even more from your arguments that such malware disclosure practices are so bound to the current system that it looks like almost no one is left to question them.
I still consider signature-creation speed tests useful as a possible way to score AV vendors, as an alternative to absolute detection rate tests, in a totally different AV ecosystem where malware gathering would not be such a limiting factor with so many unclear aspects.
Again, I still wonder what people would think if a pharmaceutical company could not get a sample to research and develop a new vaccine because it had to prove, for example, how many other vaccines it had already developed. But this will never happen, I guess; no way would it be endorsed by the same kind of clique that affects the AV ecosystem.
After all, I guess computer viruses are treated as a second-rate threat, whereas their biological siblings evoke totally different considerations.
Every year, the World Health Organization predicts which strains of the virus are most likely to be circulating in the next year, allowing pharmaceutical companies to develop vaccines that will provide the best immunity against these strains.