Service to the human race, or fame-seeking selfishness?

Testing organizations may feel entitled to set arbitrary minimum requirements because samples are considered private property, or are regulated by private agreements.

So it is possible to set “minimum requirements” against samples that can no longer be found in the wild, thus restricting sample sharing to established brands without ever needing to state that explicitly.

AFAIK inter-vendor malware sharing is usually regarded as a private agreement, and thus a new AV brand may not be entitled to cooperation simply because another brand already has enough partners.

IMHO the whole point is whether sample sharing should be regulated by such private agreements, whereas biological viruses are treated differently for obvious reasons.

Every year, the World Health Organization predicts which strains of the virus are most likely to be circulating in the next year, allowing pharmaceutical companies to develop vaccines that will provide the best immunity against these strains.

Having a sample is only the first step in researching and building an appropriate countermeasure, be it an AV signature that only works on a specific patented AV engine, a removal application, a patented heuristic detection engine, or a patented HIPS technology.

While that’s a possibility, I’m sure you’re aware of the problems that arise from handing out samples to simply every Tom, Dick, and Harry.

gibran, given today’s rate of malware growth, I assure you there is no need to keep any samples more than 3 years old (at most) in the test set, nor would it be feasible to do so. AV-Test regularly tests with samples no older than 12 months, while ~60% of AV-Comparatives’ testbed for the last review was no older than Oct '07. The myth that testers test with obsolete malware that cannot be found anywhere is one that needs to die.

I guess the concern is that such restrictions might not be imposed only on lone individuals or genuine rogue AV developers.
Today this is legitimate: they are their samples, so they choose the restrictions.

I make an effort to read methodology papers when I can find them, even if they can be difficult to understand since I’m a plain end user. If you know of a specific AV tester who does this and describes his selection criteria, I will gladly add it to my favourites - even more so if there is a methodological description of the minimum-requirement testing procedures that could possibly be used to restrict sample sharing.

Anyway, IMHO the whole point is still whether sample sharing should be regulated by private agreements, whereas biological viruses are treated differently for obvious reasons.

Yes, but the thing to note is that the restrictions are not discriminatory. There is no attempt to block or encourage sharing of samples among select vendors by the tester because of any personal gain. Any vendor who fulfills the criteria gets the samples - and most of them do.

For my post I headed to the AV-C online results page for a quick verification of my facts. AV-Test is a bit trickier, as the translated links seem to have expired for now. Both PCMag and VB claim that samples are at most 12 months old when quoting AV-Test results, but I don’t have a methodology listing at hand right now.

This is very different from using criteria that merely withhold samples from lone individuals or genuine rogue AV developers.

E.g. using a minimum detection rate as a selection criterion means that the test only evaluates malware-gathering abilities, and the sample set may include an unverified number of nowhere-to-be-found samples even if none of it is older than 12 months (assuming those time-related selection criteria are ever documented).

Malware-gathering abilities can also be affected by inter-vendor private sharing agreements and the related market share in the case of user submissions (e.g. new samples are submitted to a specific AV vendor and shared among partners), by time-related availability (e.g. a new sample is submitted and shared among partners, then gets detected and exterminated, or the vector sites are shut down), or by spreading ability (low-spreading samples are likely to be exterminated faster).

Business logic and private agreements have much more effect on the AV ecosystem than on its pharmaceutical counterpart.
It goes without saying that this will continue as long as malware is not considered a first-rate threat.

In the current situation, just as AV engines can be treated as intellectual property, malware samples are treated as private property, and it is thus legitimate for the holder to claim certain exclusive rights.

On the other hand, if a pharmaceutical company develops a vaccine for a virus, it can file a patent and get exclusive rights to that specific cure.
This will not prevent another company from researching that same sample, developing a different cure, and patenting it.

False. To repeat myself again, the ability to gather malware is only one of the factors in determining detection. A lot of factors come into play - it’s not as simplistic as “get malware, add detection”. The manpower to process samples and the quality of the scanning engine are also key factors in determining how well a product performs. In his post Melih claims to be able to detect 100% of the malware they have seen. I know for a fact that this is a lie; they’re nowhere near to detecting even half the samples I’ve sent.

A company can have some well-polished collection mechanisms and still fail at detection. Comodo is one example, given the automated submission system built into CFP. Until a short while ago, PC Tools (with their ThreatExpert system) was another, though they’re still nothing to shout about now. And then there are some vendors who deliberately scale back the detection they can achieve due to various factors, such as to avoid false positives; McAfee, F-Prot and Trend Micro are three such vendors that I know of, but I highly suspect all major vendors do this to some degree. For a short while Symantec deliberately scaled back the full power of their heuristics engine during the NIS2009 beta as well, perhaps due to similar concerns.

Comodo is not a newcomer to the antivirus industry. They’ve been around for years, and already have an entrenched and well-deserved reputation. If their malware collection infrastructure is still as roughshod as when they first started, it’s nobody’s fault but their own, and certainly not because they’re suffering from the disadvantage of being a “new” company.

Not quite. Nobody “owns” malware except for the guys who wrote them (and even that is debatable), and neither do testers charge an additional fee for distributing samples. There are no exclusive claims to malware, contrary to your claim; just because you found one sample doesn’t make it yours. I believe you’re getting confused between this so-called “exclusive rights” to malware, and the act of handing it out to untrustworthy parties. I really doubt testers don’t distribute samples to vendors who don’t meet the minimum criteria because they think the samples are “theirs”; if that’s the case, they wouldn’t distribute samples at all.

I see you are shifting the point and taking another chance to shoot at Comodo. What you describe would be the situation in which malware samples are shared without restrictions among all AV parties. In that case it would be possible to reliably score such aspects.

I wish to point out that this topic concerns whether there should be arbitrary restrictions on malware sharing, and that you started from an indirect reply like

Then you continued with

I wonder if you think that a non-discriminatory requirement can endorse a selection criterion that measures detection rates over a sample set that may include nowhere-to-be-found samples.

I wonder if you think that each and every 12-month-old sample can surely still be found in the wild.

I wonder if you think that malware gathered from user-submitted samples is not influenced by the cumulative market share of partnering AV brands.

I wonder how long you think the vast majority of websites that spread malware will last online.

I wonder if you think a malware-collection infrastructure built by a group of companies bound by private sharing partnerships can really be compared to that of a new player who is possibly excluded from such private sharing agreements.

I wonder if you think that new AV brands could possibly gather all the samples that were available before development of their AV engine even started, and whether anyone should be entitled to withhold samples by leveraging some “minimum requirement” argument.

I guess it depends on what those arbitrary criteria for restricting sample sharing really are.

I would still like to know whether there is an AV tester who thoroughly discloses his/her selection criteria and minimum-requirement testing methodology, letting everyone understand the restrictions imposed on malware sharing and decide whether or not such restrictions are discriminatory. But even so, I still wonder what people would think if a pharmaceutical company could not get a sample to research and develop a new vaccine because it first had to prove, for example, how many other vaccines it had already developed (and what the required minimum number of developed vaccines would be).

Once again, IMHO the whole point is still whether sample sharing should be regulated by private agreements, whereas biological viruses are treated differently for obvious reasons.

I was simply pointing out that detection rates are not based solely on a vendor’s ability to gather malware, as you claimed. It may be the cause for low detection rates, it may be part of the cause for low detection rates, or it may not be a cause at all.

I think it can. The criteria were created to prevent abuse, not to discriminate against specific vendors. I imagine the restrictions were put in place to ensure that vendors who participate in the sharing do actually have competent virus labs, and that the testers themselves do not become virus collectors for vendors who have no ability to do so themselves. And once those very non-discriminatory criteria are established, there seem to be no further conditions barring a vendor from receiving samples from the tester - short of concerns about the vendor’s ethics, of course.

Regarding your “nowhere-to-be-found” samples: if they were really nowhere to be found, they wouldn’t have ended up in the tester’s sample set in the first place.

I don’t have hardcore statistics that are 100% verifiable, if that’s what you’re asking for. I do think, however, that it isn’t a problem for a vendor unless the sample is less than 12 months old.

No, I don’t, but that wasn’t the point I was trying to make. A vendor fully detecting the samples submitted by its user base doesn’t necessarily make its product’s detection rate reflective of the overall malware population, but if it can fully protect its users there would be little need for it to bother with what samples are inside testing organizations’ sample sets at all.

Comodo insists on picking up every unknown file from its users’ systems, in addition to manually submitted samples. But even among user submitted samples (at least from me) I see an average of 30% detection after several weeks. Hence my claim that a company can have some well-polished collection mechanisms and still fail at detection.

In the era of fast-flux domains? Not very long, but definitely long enough to infect users. And certainly long enough for those malware to end up inside the collections of testers and antivirus vendors alike. Come on, now. They don’t vanish instantly. How do you think anyone got those samples at all? Time machines?

As I’ve already mentioned, Comodo is not new at all. Secondly, sharing criteria exist only between testers and vendors, as defined by the testers themselves - let’s not confuse and lump this together with sharing between vendors. Researchers share samples with colleagues from other companies if those colleagues are considered trustworthy; I believe I’ve discussed this with you at length before.

Once again you make it sound as though there’s an insider’s clique of corporate bigwigs among the “big boys” who scheme and conspire to decide who gets samples. On the contrary, it’s the tester who establishes a public baseline that even the so-called “big boys” need to toe in order to receive samples from the tester. There’s absolutely nothing hush-hush and backstage about it.

For some reason I’d always thought it was 80% detection for AV-C. I’m not sure where I got that impression from, though, because now that I go back and look through the methodology outline, it’s not stated in there.

That wasn’t the point I was trying to make. Despite all these statements about legitimate and non-discriminatory selection criteria, this only means that not all kinds of restrictions can simply be motivated by detection-rate tests.

I would find such considerations more appropriate if those tests were only meant to score detection rates, and not to establish whether an AV brand is eligible to receive samples (always provided that private property can be denied for any arbitrary reason, or no reason at all).

I would imagine that no detection-rate test is able to tell the difference between AV vendors whose malware-collection infrastructure is influenced by sharing partnerships and AV vendors whose isn’t.

Thus, while it can be useful to score how well an AV protects against known samples, IMHO it doesn’t say much about how competent a specific brand’s AV lab is, if it is not possible to exclude bias from possibly existing cross-vendor partnerships.

It looks like you are sure that each vendor gathers samples independently, without any aid from private sharing agreements, and that each direct contribution (e.g. samples submitted by its own user base) or indirect one (e.g. samples submitted by partners’ user bases) collectively amounts to an irrelevant part (I wonder how large an amount can still be considered irrelevant).

There is no need to cite any specific AV tester either, but I would like to know what you consider a non-discriminatory selection criterion for sample disclosure, along with a description of how it could be possible to really measure how competent a single virus lab is on its own, without relying on any form of partnership.

You can then leave it to other readers to verify whether any tester or vendor meets your suggested criteria.

Such clean-cut logic doesn’t address the fact that not all malware can be found for an extended timeframe.

Apart from recurring threats, which could also be years old, I have yet to confirm how many samples in any AV testbed were available for at least a week.

I have yet to understand, then, whether an AV company that fails to gather a sample within a week (or any meaningful timeframe) doesn’t qualify as trustworthy, or simply has a bad virus lab.

AFAIK AV testbeds are not designed to test that, and yet I wonder whether anyone could possibly use them to score trustworthiness or virus-lab competency.

Again, assuming that most malware is regionally targeted, I wonder whether cross-AV partnerships could prove useful in increasing the geographical coverage of malware gathering. And if malware gathered from user submissions is influenced by the cumulative market share of all partnering AV brands, I wonder how much this adds to each single vendor’s “competent virus lab”.

That’s a circular argument. Besides, I guess you consider private-agreement malware sharing to be motivated by trustworthiness alone, whereas business logic also includes other restrictions - for example, whether existing partnerships already fulfil an AV brand’s needs.

What about vendors involved in private sharing partnerships? The more vendors involved in sharing partnerships, the more likely the collective sample set is to reflect the overall malware population.

I would be highly interested to know whether any AV tester actually measures how much time each AV brand needs to issue signatures for all the samples sent after a comparative (in case he/she does share samples, of course). Besides, I see you wish to focus on signature-creation speed while dismissing the malware-gathering aspect as irrelevant.

Yep, not very long. I cannot possibly know how long a malware site will last either, but again I wonder what happens once an AV vendor gets a sample, and how private sharing partnerships affect the subsequent steps.

I wonder whether, as long as the sample exists and some AV vendor passes a test, it really doesn’t matter what happened in between. Numbers tell the truth, I guess - and they surely do about detection rates.

If it is really all that matters.

Once again, IMHO the whole point is still whether sample sharing should be regulated by private agreements, whereas biological viruses are treated differently for obvious reasons.

I see even more from your arguments that such malware-disclosure practices are so bound to the current system that it looks like almost no one is left to question them.

I still consider signature-creation-speed tests useful as a possible way to score AV vendors, an alternative to absolute detection-rate tests, in a totally different AV ecosystem where malware gathering is not such a limiting factor with so many unclear aspects.

Again, I still wonder what people would think if a pharmaceutical company could not get a sample to research and develop a new vaccine because it first had to prove, for example, how many other vaccines it had already developed. But I guess this will never happen; there is no way the same kind of clique that affects the AV ecosystem would be endorsed there.

After all, I guess computer viruses are treated as a second-rate threat, whereas their biological siblings evoke totally different considerations.

Every year, the World Health Organization predicts which strains of the virus are most likely to be circulating in the next year, allowing pharmaceutical companies to develop vaccines that will provide the best immunity against these strains.

As I’ve mentioned, you’re getting confused between a vendor not being eligible to receive samples because the tester feels that the samples are his own private property (they’re not), and because the tester feels that he’s not an employee for that vendor who collects samples for them due to their own inability to do so. If you believe your misconception is actually true, can you explain to us why testers would hand over their “private property” simply based on how much of it vendors can detect? It makes no sense at all.

Not entirely, no. But it does tell the tester which vendors are simply trying to leech samples off him.

That is actually the case with most competent vendors, yes. One of the metrics of a good product happens to be the ability to protect its users from malware before said users run into said malware. It doesn’t matter if a product can add and release detection within minutes after receiving user submissions. If it consistently detects only a poor percentage of malware before that, it’s still a bad product. Aside from some dedicated volunteers, user-submitted samples among good products are typically insignificant compared to the number the vendor itself gathers from other sources.

But as I said above, not that this is of much relevance to the tester. The tester is simply interested in ensuring that vendors have their own means of collecting samples other than leeching off him. Which I think is quite a valid concern.

Assuming a competent vendor, this isn’t an issue at all, since the vendor in question (Comodo) is older by far than most of the samples used in most reputable tests. Assuming an incompetent vendor, I think the incompetence is the issue here instead of the age and timeframe of availability of the samples.

In an age where zero-day protection is strived for, I think that the inability to obtain the sample after more than a week - let alone to add and release detection - should be the exception rather than the rule. And if a vendor seems to have a tendency to exhibit this failure not just on the rare occasion, but repeatedly over an extended period of time, I think that’s a fairly good indicator of untrustworthiness and/or bad infrastructure on their part. Don’t you?

Your logic would make sense if it was the sales department and management that regulated the sharing of samples and signed the relevant contracts. Sample sharing among researchers (whether they work for vendors or are independent) is often done unofficially, often with no commercial gain for themselves and no specifically dedicated infrastructure set up to facilitate this exchange. Simply because this link is off the top of my head, ESET’s ThreatBlog provides a brief glimpse of the nature of this sharing: http://www.eset.com/threat-center/blog/?p=158

But let’s assume it’s a commercial exchange for now. If this was so, then Comodo’s position becomes even easier, as they can simply walk in and ask to buy from others without being hindered by their reputation.

It is irrelevant in this case because Comodo already have the samples delivered right to them. All they need to do is process the samples. Again, this is to prove the point that the popular misconception that vendor has sample = nothing else matters is false.

I’m simply explaining the status quo because you’ve provided no solid arguments that things should be any different. “Disclosure” practices? Once again, you make it sound as though a select few entities control who gets which samples. It’s simply not possible to exert such control over the industry, when even amateurs like myself have no problems collecting more malware than we can handle. And until you can stop making this fallacy the crux of your arguments, I don’t think we’ll get anywhere, simply because we’re spending all our time just trying to get you to base your points on facts instead of popular myth.

Again I ask you: if you know of a specific AV tester who does this and describes his selection criteria and the minimum-requirement testing procedures that could possibly be used to restrict sample sharing, I will gladly add it to my favourites. I would still like to know whether there is an AV tester who thoroughly discloses his/her selection criteria and minimum-requirement testing methodology, letting everyone understand the restrictions imposed on malware sharing and decide whether or not such restrictions are discriminatory.

With all the imaginable selection criteria, I see you are eager to call me confused, to dismiss IMHO important aspects as misconceptions, and to simply state that.
I see you carefully avoiding any attempt to describe a non-discriminatory selection criterion, leaving to readers the effort of verifying whether any tester or vendor uses your suggested criteria.

How reassuring; but you completely omitted to state how much that “insignificant” contribution actually amounts to.

IMHO an overstated concern, at least from your presentation. The simple truth still comes to mind that pharmaceutical companies compete on research and development, not on sample (biological virus) disclosure. I don’t see any physician thinking in terms of ‘leeching’ either.

IMHO brands with more partnerships gain an advantage. Nowhere have I read an estimate of how great this advantage is. Besides, the entire system looks self-referential to me, especially considering your whole presentation.

Yep, it looks like private agreements again. I’m interested in reading a thorough description of such vetting procedures. I will refrain from posting additional considerations and invite everyone to read that “Ethical considerations” part.

I don’t remember stating so. I do remember stating that having a sample is only the first step in researching and building an appropriate countermeasure, be it an AV signature that only works on a specific patented AV engine, a removal application, a patented heuristic detection engine, or a patented HIPS technology.

I find the explanation extremely lacking. Indeed you provided some explanation, but you missed the whole point.

Once again, IMHO the whole point is still whether sample sharing should be regulated by private agreements, whereas biological viruses are treated differently for obvious reasons.

I still wonder what people would think if a pharmaceutical company could be prevented from “leeching” a virus/bacteria sample to research and develop a new vaccine because it had to prove, for example, how many other vaccines it had already developed (and what the required minimum number of developed vaccines would be).

What is an established practice in the current AV ecosystem looks quite different from its biological counterpart, and furthermore your explanations confirmed my concern that computer viruses are treated as a second-rate threat whereas their biological siblings evoke totally different considerations.

Please, this isn’t going to turn ugly, is it?

If you feel I offended solcroft in any way, please send me a PM and let me know more about it. I will try my best to address your concerns.

I’m sorry about any misbehavior on my part.

I do not think you have offended him, Gibran. I am just thinking of the previous discussion with Solcroft that led to a ‘battering’ if you will against this person.
I enjoy a great debate/conversation as much as reading the discussion between you two. It is very informative (thank you). I feel humbled by the depth and length of your debate.
No disrespect was intended if that is how you took this.
I am simply making a comment.
I look forward to reading further your discussions.

AV-Comparatives provides a methodology listing of its procedures on its website, specifically here: http://www.av-comparatives.org/seiten/ergebnisse/methodology.pdf

AV-C and AV-Test are the only tests whose results I personally consider to be reliable right now, other than my own, of course. :wink: Unfortunately, Andreas Marx of AV-Test doesn’t seem to make his organization’s test methodology readily available, but from what I’ve seen he does respond to email queries from the public. Perhaps you could contact him and find out?

I believe you’re confused simply because you cannot seem to recognize the fact that testers not sharing samples doesn’t necessarily mean they believe their samples are “private property” that belongs to them. I was a participant in an online conversation with IBK of AV-C some time ago, where we were told in response to a question that the criteria were set in place to prevent indiscriminate sharing of samples with non-trustworthy vendors. Even though the exact percentage required to qualify to receive samples from AV-C doesn’t seem to be publicly available anymore, I’ve seen no reason so far to doubt IBK’s claim, as his methodology appears to be in line with his stated aims.

Of course, I may be wrong, and you know perfectly well what you’re talking about. But so far I’ve seen no evidence to back up your claims, either from you or anyone else. If you have proof that testers actually deliberately discriminate against vendors they don’t like and deliberately withhold samples from them even though they meet the minimum criteria, please do share.

I’m not a biologist, and luckily for me this thread concerns computer viruses and not biological ones. To return to the topic at hand: in what way exactly do you believe that a tester not wanting to become a collector for vendors who don’t have their own facilities is an unreasonable concern?

I can offer a few educated guesses in answer to your question, though. Pharmaceutical companies are required to be approved by the government. I don’t imagine there would be problems sharing biological virus samples within the pharmaceutical industry, as one is assured that everyone is certified to government and/or international standards. Not so with the antivirus industry. If it wants to maintain a semblance of professionalism and integrity, self-regulating standards are necessary, especially for an industry based so heavily on trust.

There is an advantage, yes. But last I checked testers do not demand perfect performance from vendors before sharing samples with them. There would be nothing to share anyway, if that were the case. There is simply a minimum baseline to be achieved. I think that the ability to obtain samples within one week is a very reasonable minimum baseline indeed, given that the aim should be zero-day protection and one week leaves a VERY big margin of error. Too big in my opinion, in fact.

What private agreements, exactly? You display a disturbing consistency of NOT providing any explanations at all - let alone evidence - behind your repeated insinuations.

“E.g. using a minimum detection rate as a selection criterion means that the test only evaluates malware-gathering abilities…”

Does that refresh your memory? :slight_smile:

I realize you were looking for an explanation of why sample sharing should be regulated by private agreements, but a little reading of what I’ve said will reveal to you that my explanation was that there are no such “regulations” and “private agreements”. They have no way of regulating or restricting me as an amateur, and they have no way of doing the same to large international corporations in the industry. Samples are available regardless of whether testers want to share them. They can and will be obtained if one wants to do so, and there is nothing anyone can do about it short of banning the internet. Again, until you can stop using this fallacy as the crux of your arguments, we’ll simply go around in circles.

I read that paper (August 2008 revision, dated 15/09/2008) and found it difficult to understand.

Is detection rate used as a selection criterion to deem a vendor eligible to receive samples?
If so, can you explain which sample set is used in that case?

The full one?
The one that includes only malware not older than one year?

Does that paper explicitly state what the minimum detection criterion is?

I see. In that regard I’m surely confused. Even though it doesn’t necessarily mean that, I feel concerned in that regard, and I believe such aspects should be thoroughly and publicly documented. Surely it would not be something difficult to do.

Then you surely asked which sample set is used for that type of vetting procedure.

Still, I would like to know what you consider a non-discriminatory selection criterion, leaving to readers the effort of verifying whether any tester or vendor uses your suggested criterion.

Do I have any possible way to verify that? Does this mean that any AV tester who could possibly deny sample disclosure provides a way to verify the eligibility criteria?
E.g. if a detection-rate test is used, would it be reasonable to assume that a list of CRC hashes of the missed samples would be provided?
This would at least allow a rejected vendor to know whether they managed to gather an undetected sample at a later time.
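Such a hash-list check would be straightforward to implement. Below is a hypothetical sketch in Python, assuming the tester published the missed-sample list as plain CRC-32 hex strings; the function names and file layout are my own invention, not any tester’s actual practice.

```python
import zlib
from pathlib import Path

def crc32_of_file(path):
    """Return the CRC-32 of a file's contents as an 8-digit lowercase hex string."""
    crc = 0
    with open(path, "rb") as f:
        # Stream the file in chunks; zlib.crc32 accepts a running value
        # so large samples don't need to fit in memory.
        for chunk in iter(lambda: f.read(65536), b""):
            crc = zlib.crc32(chunk, crc)
    return f"{crc & 0xFFFFFFFF:08x}"

def match_missed_samples(sample_dir, missed_hashes):
    """Return paths in sample_dir whose CRC-32 appears in the
    tester-published list of missed-sample hashes (case-insensitive)."""
    missed = {h.lower() for h in missed_hashes}
    return [p for p in Path(sample_dir).iterdir()
            if p.is_file() and crc32_of_file(p) in missed]
```

Note that CRC-32 is collision-prone and not cryptographically secure, so a real-world list would more plausibly use MD5 or SHA hashes; the matching logic would be identical.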

EDIT: I just noticed that the above quote concerned deliberately discriminatory criteria. This does not mean I endorse that description, and I wish to apologise for carelessly replying to it without explicitly clarifying this point. I also feel the terms “subjective”, “biased” or “flawed” would be fitting substitutes for all the times I borrowed the term “discriminatory” in my replies.

IMHO a tester should be concerned with carrying out his/her test correctly. The mere fact that an AV tester can use a rare sample to test any AV is not relevant either, provided they follow their publicly available methodology. I’m sure we will likely be unable to agree on what can or cannot be inferred from such tests in the context of the stated methodology.
This is not enough, IMHO, to dismiss the difference with pharmaceutical companies as irrelevant. You can feel free to do so, though.

Although I do wonder if you think a pharmaceutical company could legitimately be prevented from gathering a sample to develop a vaccine, and whether such a case could be considered ‘leeching’.

I’m more inclined to guess that vendors with many partnerships, a well-established market share, and who possibly started developing their AV more than five years ago will have almost nothing to gain from any AV tester. I guess a tester could possibly ‘leech’ some samples instead. I would surely be interested to read any documentation about these aspects.

I used one week as an example in the first instance, though I wondered if it was reasonable. Thanks for providing such an answer. This partly resolves the nowhere-to-be-found paradox you previously described and provides readers a more specific context for a previous reply of mine.
I don’t have any specific expectation, and I would be more inclined to consider reasonable the average result of specific tests designed to measure that across a representative sample of AV brands, provided that the bias caused by cross-vendor sharing partnerships were removed. I’m interested to know if such tests are available.

Oh my! Is “private agreement” an insinuation? The private nature of an agreement surely doesn’t mean that such an agreement will not be publicly announced. Anyway, IMHO a public utility may be a better alternative to the current AV ecosystem, and I hope no one will consider that disturbing.

As I stated in my first reply to your post, having a sample is only the first step in researching and building an appropriate countermeasure, be it an AV signature that only works on a specific patented AV engine, a removal application, a patented heuristic detection engine, or a patented HIPS technology. I still wonder how you could infer this

using that refreshing reference posted well after my first reply.

What does “specific circumstances under tightly-controlled conditions with vetted individuals” mean?
Yes, there is no regulation like there is for public utilities.
Sharing is currently carried out on a per-case basis, with individual agreements between private parties.

Yes, everyone can privately gather malware samples. They can also privately choose whether or not to share them.

I guess everyone could read all your remarks and then decide whether or not my viewpoint was based on a fallacy. I did not assume you considered it otherwise, and I thank you for your efforts to describe your viewpoint.

Keep in mind that my answers here are based on how I personally understand them to be, and do not necessarily reflect what really happens at AV-C (i.e. I may be mistaken):

Detection rate is used as a selection criterion to determine which vendors can participate in the test. Since the participants receive the samples their product misses during testing, by extension it can be said that detection rate is also a criterion for determining which vendors receive samples.

No to both questions, unfortunately. The distinction between Set A and Set B was introduced only in the last comparatives (results released last month) and was not present in any comparatives before that. For the second question, if memory serves me correctly, the threshold is 80%. This seems to have been changed, though, as it is no longer mentioned in the methodology report.
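As a rough illustration of what such a threshold amounts to, assuming the 80% figure mentioned above (the function names and sample counts are made up for the example):

```python
def detection_rate(detected: int, total: int) -> float:
    """Fraction of the test-set samples the product detected."""
    if total <= 0:
        raise ValueError("test set must be non-empty")
    return detected / total


def meets_minimum(detected: int, total: int, threshold: float = 0.80) -> bool:
    """Hypothetical eligibility check against a minimum detection-rate threshold."""
    return detection_rate(detected, total) >= threshold
```

So a product detecting 850,000 of a 1,000,000-sample set would pass such a bar, while one detecting 799,999 would not; whether any tester still applies exactly this check is, as noted, unclear from the current methodology report.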

Most of it is publicly documented. For the missing details that you deem to be important, I suggest it would be prudent to verify them by contacting the testers in question, before subscribing wholesale to Melih’s one-sided propaganda.

I personally feel there’s nothing discriminatory about the current practices. Do you feel that there are any aspects that are biased?

Unfortunately the burden falls upon the maker of the claim (i.e. you) to prove that. If you have no way to conclusively demonstrate that your claims are true and that testers deliberately deny vendors entry even though the minimum standards are reached, then I’m afraid that’s that.

Unless you’re trying to insinuate that they don’t, your little snippet is quite irrelevant to the discussion at hand.

Let’s assume that, like antivirus vendors, pharmaceutical companies weren’t subject to government certification and weren’t assured to adhere to a certain standard of ethics and professionalism. A company with a poor standing and reputation demands supplies of anthrax and ebola from other companies with established track records. Would you approve of the fact that lethal toxins are being handed out in broad daylight to anyone that demands them, without any form of control whatsoever? Would you support your lawmakers if they pushed for such legislation?

I think it’d be insane, and I’d start to seriously consider applying for citizenship in another country far far away. But to each his own.

While that happens as well, even the best-scoring vendors typically miss ~10k samples if you observe the numbers in the results. But as I said previously, it’s a two-way process; vendors receive samples from testers, and vice versa.

If you are willing to clarify what YOU mean by “private agreement”, then we could inspect whether you’re trying to insinuate something. There are a lot of things that could be inferred from your use of the term. For example, people might believe that beneath-the-table commercial profits or personal gain are involved, as Melih tries to imply.

It’s not really a big wonder when you said those words yourself. But if we can agree that that’s a faulty viewpoint, then there shouldn’t be any further issues.

Perhaps it means exactly what it says. The vendor providing the samples (ESET in this case) wishes to verify that the samples will be carefully and professionally handled, and that the receiving party is trustworthy and of integrity, before agreeing to share samples. I think those conditions are both fair and necessary. Don’t you?

I wish to clarify that there are no regulations and private agreements on who can possess which samples. While a tester, vendor, or an organization can choose to not share samples with another party not within their circle of trust, they are absolutely powerless to prevent that party from obtaining those samples via other means, of which there are many.

Exactly. Everyone can gather malware samples. There are no restrictions or agreements or what-have-you preventing anyone from gathering malware. It therefore strikes me as odd that Melih seems to be trying to portray Comodo as being at the mercy of testing organizations; I certainly hope it isn’t.

I asked if there is an AV tester that thoroughly discloses his/her selection criteria and minimum-requirement testing methodology, to let everyone understand the restrictions imposed on malware sharing and let everyone decide whether or not such restrictions are discriminatory. Let’s leave it at that. This topic doesn’t pertain to a specific tester either.

I have to trust that. Anyway, that is what I call an agreement between private parties.
On the other hand, I would have preferred to read about ESET’s vetting procedures and related vetting methodologies.

It is funny to challenge a misinterpreted statement with an insinuation, but I guess that if “individual agreements between private parties” at the end of my previous reply does not clarify this point, people will read your claim and agree with it.

I’m still concerned. Besides, the whole point IMHO is still whether sample sharing should be regulated by agreements between private parties, whereas biological viruses are treated in a different way for obvious reasons.

EDIT: I just noticed I carelessly replied to the above statement. This does not mean I endorse that description, and I wish to apologize for not explicitly clarifying this point.

I beg to differ. Just as methodologies are published and thoroughly documented, so should vetting procedures be.

To be more precise, I guess I should state that I was not able to find a public and thorough description of such vetting procedures.

Once a methodology is published, it is also possible to know how the end result (the test or vetting procedure) should be regarded.
This is the reason methodological papers are released along with tests.

Like my concerns about possibly flawed vetting criteria, IMHO your statement about a direct relation between simple tests and ‘leeching’ or trustworthiness is speculation. It should be obvious, with all that we both posted, that anyone could form an idea about my statements and yours and decide for themselves.

Thanks for the clarification. I would have expected you to cite a minimal number of developed vaccines as proof of that company’s standing. My mistake.

Thanks for the clarification. This info is unsettling for me; anyway, there is nothing more I could add in that regard without triggering an endless recursive discussion.

Provided that the PGP “Web of Trust” is a self-referential verification mechanism, I have to guess that what you meant by “circle of trust” doesn’t imply that.

Anyway, I prefer an approach more like public utilities, as malware fighting IMHO should be regarded as a public service. This is why I quoted

Every year, the World Health Organization predicts which strains of the virus are most likely to be circulating in the next year, allowing pharmaceutical companies to develop vaccines that will provide the best immunity against these strains.

so many times.

I just noticed that a quote I replied to in one of my previous posts regarded deliberately discriminatory criteria. This does not mean I endorse that description, and I wish to apologize for carelessly replying to it without explicitly clarifying this point. I also feel the terms “subjective”, “biased” or “flawed” would be fitting substitutes for all the times I borrowed the term “discriminatory” in my replies.

I just noticed I carelessly replied to the above quote. This does not mean I endorse that description, and I wish to apologize for replying to it without explicitly clarifying this point.

I like to think that it’s a dynamic procedure in most cases. Elements of human judgment are invariably involved when we try to decide if other people are trustworthy. Complete reliance on a specific list of outlined steps and check boxes to tick off would on the other hand sound to me like an exceedingly flawed method to achieve this.

Of course you are. But that wasn’t my question. What gives you cause for this concern, exactly?

I do not know whether the sharing of biological viruses is bound by private agreements or not, so I cannot challenge you on that claim. Nonetheless, I would imagine it must be bound by regulations, in some form or other, in any civilized country. Those regulations might be private, public, and/or government-enforced via legislation. I believe you are very much mistaken in your claim that the trade of biological viruses is unrestricted – at least, I hope you are.

Whether a vendor is approved to participate in the test is not public knowledge. Whether a vendor even applied to participate in the test is pretty much unknown save to the vendor itself and the tester. Professional testers have an interest in ensuring public trust and confidence in the test results they publish for public consumption, but which vendors applied for participation, and which succeeded or failed, is not public knowledge, and something that concerns only the vendor and the tester. If the vendor is satisfied by the tester’s response that their product does not meet the minimum criteria, then so am I.

I prefer to hold a more pragmatic and realistic view of the situation, and am personally not too concerned with details that are irrelevant to the overall picture. I do not see how detection rate – a simple performance parameter measurable with flat numbers – can be a discriminatory criterion. Your concern so far, as I understand it, is that you are not privy to every single intimate detail about the tests, and hence something must be wrong. If that is what’s worrying you, it would be prudent to seek clarification from the testers instead of succumbing to FUD, as I have suggested. From what I’ve seen, they (the testers) are fairly receptive to public correspondence.