Questioning the Economics of Security Testing, Detection and Protection

First and foremost, I would like to explain why this thread is posted in the General Discussion section of the Comodo Forum. This post deals with the subject of the consumption of goods and services; in this case, that of security testing, and thus involves a discussion of the methodologies employed, the target audience, and, to a certain degree, the ethics involved in what I would refer to as a business. This thread does not, in any direct way, attempt to criticize, analyze, defend, explore, or discuss antiviruses, internet security suites, security software, or internet security in general, but only the economics of security testing. This post does not intend to criticize, insult, or otherwise disparage any entity; any such effects that may be derived from this post are unintentional. The author therefore encourages any entity that has taken offence, or that recognizes an error on the part of the author, to notify the author either through comments or PMs. Commentary is welcome and shall be deemed constructive if it fits the criteria.

A Definition of Terms:
Questioning – Questioning implies the intellectual pursuit of an ideology or concept through a series of inquiries pertaining to the problems posited by that ideology or concept. Though the method attempts to be as objective as possible in every case, there is no validation of the claims made save for the human reason from which they derive. It does not hold that the statements made are universally true; it is entirely up to the reader to decide on the matter of their truthfulness.

Economics – Formally defined as the description and analysis of the production, distribution, and consumption of goods and services. This thread intends to discuss the matter as a good/service, and therefore deals with how this good is produced and distributed and, in effect, identifies the target audience and the purpose it serves.

Detection – Detection refers to the ability and general capacity of an antivirus to identify malicious software based on a specific methodology.

Protection – Protection is an umbrella term, so to speak, which refers to the general capacity of a product to identify, block and remove a threat. In recent discussions, however, it has also been referred to as the capacity of the product to prevent infection and any potential subsequent damage from such infection in the interest of maintaining system stability and usability, privacy and data security.

Testing – Testing is a method or series of methods used, in general reference, for any form of examination of a product or service with regard to its purpose and qualification, particularly its effectiveness in rendering such service(s).

System – A system is properly defined as a group of interacting, interrelated, or interdependent elements of a specific structure functioning as a whole to attain some specific purpose or goal. Thus, “system” in this thread refers to the whole of the computer, hardware and software, functioning interdependently to form the whole computer.

Average User – There can be no absolute definition or description of who the average user is, primarily because of the subjectivity of what is average. In the absence of statistics and formal testing methodologies (because of the impracticality of such tests), there can be no conclusive definition of the average user other than the popular belief shared by security professionals and enthusiasts: a user oblivious to the growing threats on the internet. The average user is described as a person who uses the internet but does not recognize its threats, nor possesses the means and know-how to identify threats other than through antiviruses. Malware, however, has become prevalent and has come to the attention of average users. The average user now recognizes the threat, but does not possess the proper training to identify and respond to it, much as an ordinary person cannot recognize and respond to bombs.

Introduction

Several months ago, a discussion of the methodologies used in testing antiviruses was put to debate: can detection-rate-based tests be considered a legitimate form of testing, in the sense of “Does it show the actual usefulness of a product in an actual setting?” The difficulty in answering the question lies primarily in what detection is perceived to be, particularly by an average user. Because the term is as vague as the term “average user” to which it is ascribed, there is no conclusive evidence for either side of the debate. The questions the author would like to pose in this thread are the following:

  1. Is testing merely for detection rates obsolete? If so, how can we properly measure detection and protection?
  2. Who is the average user and how is he being characterized?
  3. For what purpose is security testing provided? Who is the target audience of these tests? Is it practical to employ such tests?

Analysis

Since the Industrialization period, technology has continuously improved and information has become more and more widespread. This was furthered by the Enlightenment period and complemented by the humanist movements of the early ’90s. With the rise of technology, great importance and weight have been placed on information. Even before the Industrialization period, man recognized the importance of information, utilizing every aspect of data in memory to improve the human condition or turn the tides of war. With this recognition began the drawing of the blueprint of one of (if not) the greatest inventions of man in its quest for information and knowledge – the Internet.

The internet began the surge of information throughout the world and revolutionized many aspects of the human condition, particularly communication. Through the internet, the influx of information has become nearly unstoppable and available to the masses. With this in mind, we ask again: is the description of the average user still fitting in today’s context?

With the growing industries and market share on the internet comes the problem of crime. Crimes are generally motivated by material gain, usually money. The internet, due to the flexibility of the technology, allows various methods and means to achieve such ends. Hence, it is not unlikely, and is clearly observable, that the internet has also allowed crime to progress and evolve. The real question now is “Are average users evolving at the same pace as the crimes?”, the answer to which will tell us whether the popular belief in the innocent average user is still applicable. The answer is no. Technology is evolving much faster than the average user can handle, which explains the difficulty of selling radical technologies. As in real life, malicious programs are capable of posing as legitimate programs, even completely imitating one (especially now that this is bound only by the malware author’s knowledge of software coding). Because of this, antiviruses and security programs remain the most practical solution to cyberthreats.

Security programs require skilled technical knowledge to produce. It is a profession, a service on which businesses build their foundations. Since the digital world derives itself from the physical (its creators being of the physical), the system it is structured on is a quaint imitation of the one existing in the physical world and is dominated by capitalism. It comes as no surprise that the services rendered come with a fee. Of course, one could argue that free alternatives exist, but in terms of cybersecurity, free alternatives are similar to “free taste” or the services rendered by politicians: advertisements. Free programs are unable to continue without the support of advertisements, and the free alternatives are, more often than not, advertisements devised with the intention of gaining market share. Cybersecurity is as much a business in the digital world as security services are in the physical world. It is for this reason that tests are conducted.

Tests are often conducted to determine the viability of an option. In the business world, tests are determining factors of which product is better and which will gain more market share, in other words, a method of advertising. Relating this to cybersecurity, the tests employed are intended to determine the better option for consumers. The question then is “Are these tests credible, in particular, detection tests?”

Detection, of course, is a crucial factor in security. To respond effectively, one must first recognize what threats are to be blocked. However, faced with the increasing pace of malware evolution and production, signature-based detection is feared to be fast becoming obsolete; but is there a basis for this fear? Yes, the basis is clear: millions of zero-day malware samples are being produced by the second. However, I hold that signature-based detection is still not obsolete. The solution is not to remove signature-based detection, since this form of protection has consistently been held to be the most effective and the most practical throughout the years. As an example, imagine zero-day virus A, which performs a DoS attack. It cuts off internet access to prevent updates, so signature-based detections are rendered useless. All the while, it restricts itself to safe actions (i.e. creating shortcuts) in order to evade detection by behavior blockers. Are signature-based detections obsolete? No, not entirely, because signature-based engines are also capable of updating offline via downloads, and from there on can detect and remove the infection. So no, signature-based antiviruses are yet to become obsolete.
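To make the mechanics concrete, here is a minimal sketch of signature-based detection reduced to hash lookups. Everything here is hypothetical (the sample bytes, the function names, and the idea that a signature is just a SHA-256 digest); real engines use richer signature formats, but the offline-update argument above holds either way:

```python
import hashlib

def sha256_of(data: bytes) -> str:
    """Hash the file contents; signatures here are just SHA-256 digests."""
    return hashlib.sha256(data).hexdigest()

def scan(data: bytes, signatures: set[str]) -> bool:
    """Signature-based detection: flag the file if its hash is known-bad."""
    return sha256_of(data) in signatures

def update_signatures(signatures: set[str], downloaded: set[str]) -> set[str]:
    """An offline update is just a merge of a downloaded signature set;
    no live internet connection is required at scan time."""
    return signatures | downloaded
```

Note that after `update_signatures` merges a set obtained by any out-of-band download, `scan` detects the new threat even with the machine's connection severed, which is the point made about zero-day virus A above.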

One may argue, however, that all this could have been prevented through other methods, such as sandboxing. The problem with such techniques is their reliance on human intervention. Automatic sandboxing is indeed helpful, but some software refuses to run and install under a sandbox. If the target audience is average users, this is a problem: it creates unwanted hassle and reflects poorly on the product’s author. If the author was after market share, then the target ought to be someone more informed. For the sake of argument, say that the user has been able to work around this through the implemented whitelist; suppose that a number of programs are still unavailable in the whitelist, and that the user does not recognize this as a threat. Then infection is not prevented, which again reflects poorly on the author.

Since we have concluded that cybersecurity is a business industry itself, we may in part say that its target consumers are average users, and thus employing and advertising the most practical and popular method remains the ideal resolution to the problem. Since testing of security products is done in order to compare products, the most logical method of testing is to test whatever is common to all products: signature-based engines. Since we have also concluded that testing in the business world serves as advertising for the competitors, and is for the consumption of average users, it is based only on testimony, which consumers must decide whether or not to believe. Inasmuch as commercials rely only on testimony, without explicitly stating the basis of their claims, they remain effective and are taken as testimonies. Likewise, recommendations among friends and colleagues present no conclusive basis or evidence for their claims, but are based only on testimony.

Even in the reading of scientific papers, consumers resolve to treat them as testimonies, taken as truthful without due presentation of concrete proof and actual evidence (the perfect example being the numerous accounts of success in the creation of vaccines for AIDS, or the documentation of past “experiments” that never saw completion, which at the time caused a stir in the scientific world) or reiteration of the experiments by their readers. Such is the nature of documented tests. It is a case of testimony, not personal experience. In the effort of widening our view, we collect these testimonies and assert them as truthful. Thus, detection tests are conclusively not obsolete.

On the matter of protection, there can be no measure of the degree of protection a product provides. No test can determine with absolute conviction the capacity of a product to protect a system, given the astounding number of permutations and combinations of possible attack vectors; the risk of error and the unknowns make any attempt to measure protection invalid. One can only measure, to a certain extent and objectivity, methods of protection such as detection, isolation, and removal. Prevention, likewise, cannot be measured, particularly because it assumes a possibility rather than a certainty. Prevention implies the attempt to secure data, recognizing the threat but not knowing the threat. It relies on sheer probability, which in this case cannot be objective because of the unknowns and the constantly dynamic nature of technological evolution. It differs strongly from the prevention of diseases in that diseases have known vectors (e.g. infection through contact via animal bites, airborne transmission, or intake of matter), but even then, the probability of preventing an infection has no measure.

The question of how detection tests are to be employed, however, remains unresolved. How does one properly execute detection tests? The answer is relatively simple: using two sets of malware, one known and another unknown, in a number more or less suited to the amount of daily encounters, which may or may not be in the hundreds or more (considering the prolific presence of ads on websites). There have been objections to this method, however, some stating that such tests are obsolete while others adamantly reason that they are not. Neither claim has a solid foundation, and one argument can be paired off with another, leaving no conclusive answer to the question. The author of this thread, however, would like to propose that, on the assumption that the target audience is the average user, detection tests remain practical, particularly because detection remains the most popular and widely used method of responding to infections; thus, whether or not transparency is exhibited in the testing conducted, the tests are to be taken as testimonies yet to be verified through personal experience. No moral principle is violated under such conditions, regardless of whether the testing presents itself as conclusive or not, as was pointed out earlier.
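The two-set method described above can be sketched as a small test harness; note that `scanner` stands in for any product's detection routine and the sample sets are placeholders supplied by the tester, not part of any actual testing lab's tooling:

```python
def detection_rate(scanner, samples) -> float:
    """Fraction of samples the scanner flags as malicious."""
    if not samples:
        return 0.0
    return sum(1 for sample in samples if scanner(sample)) / len(samples)

def run_detection_test(scanner, known_set, unknown_set) -> dict:
    """Score the known (in-the-wild) and unknown (zero-day) sets separately,
    mirroring the two-set methodology proposed above."""
    return {
        "known": detection_rate(scanner, known_set),
        "unknown": detection_rate(scanner, unknown_set),
    }
```

Reporting the two rates separately matters: a product can score perfectly on the known set (a pure signature test) while the unknown-set score exposes how it handles threats its signatures have never seen.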

If the solution is to take out the mediating testing parties, then this leaves room for a greater security breach: every product can now claim itself legitimate and effective, which may cause widespread infections and distrust in security, all the while defeating the purpose of the cybersecurity industry. From all this, the author of this post concludes that he finds detection tests still reliable and practical in the absence of a practical, reliable, and valid testing method to measure protection and prevention, and that the documented tests provided are taken as testimonies to be validated by personal experience, in the interest of catering to the security needs of average users as a business industry.

One big problem: some testing organisations pretend to be independent while they are getting paid. Also, there is no standard to which they adhere. So the data they provide cannot be trusted, for two reasons:

1) They are paid by the AV provider to do the test.
2) They don’t adhere to any standards, nor does anybody check their work.

Also, we need more stats on what percentage of malware is not caught by “automatic sandboxing”, to understand the “risk” of manual intervention.

I for one would love to have an independent testing org that complies with some standards. AMTSO was set up to achieve that, but the testing organisations that were part of it then got out of it.

It would be silly of any user to trust any results from testing organisations like AV-Comparatives who
1) get paid by AV vendors,
2) and don’t adhere to any testing organisation standards, nor are audited, yet claim to be independent.

Melih

I have never seen such a huge and well-thought-out post. :-TU
2512 words,
12959 characters (without spaces).

That’s rather beside the point, under the presumption that it is a business. As such, it is a testimony, and the users decide whether or not it is truthful; that is, they validate it on account of personal experience. Until such time, these independent testing organizations, as with commercials, are accounted for as testimonies, in the same manner that we use videos showing tests (for example, AV company A advertises its products by showing us a video of how well they perform). Consider them, then, more or less testimonies of users rather than of experts, though nevertheless above average users, if these organizations do not adhere to any standards or verification; we treat them, like commercials or any testing done, as testimonies rather than speculating that they are fraudulent.

As I have said before, the arguments coming from both sides have no solid foundation, and one argument can be paired off with another. As a demonstration: it is likewise “silly” (for lack of a better term) for AV-Comparatives, considering that it would appear to be a business of sorts, to violate the trust of those who participate in its tests, in other words, its “clients”, for such an event would cause distrust and a waste of funds on the part of the AV company that allocated funds to a single form of “advertisement”. Such a claim is the same as stating, “We ought not to buy products from those who pay for commercials and advertisements, because they show no truth in their claims.” It makes no difference whatsoever, for regardless of standards or validation, the “average user” whom we established as the target audience does not concern himself with such things, and takes these tests upon consumption as “testimonies to be validated by personal experience”, consciously or not.

True enough, but as I have stated before, statistics for manual intervention, and for general usage for that matter, under the presumption of the definition of an “average user” as stated in the first post, cannot be gathered by any practical, logical, definitive, and conclusive method. The room for error, as a standard for statistics, is already too large to begin with.

Bearing in mind that we are discussing the business of cybersecurity, presumptions are made based on how the “average user” is defined and characterized. As a security concern, we take it that the average user follows the laws of physics, so to speak, and takes the shortest path to achieve the general aim. Hence, it is assumed that manual intervention, under such conditions and presumptions, poses a good probability of infection.

It would be silly of any users to trust any results from testing organisations like AV-Comparatives who 1)get paid by AV vendors 2)and don't adhere to any testing organisation standards nor are audited and claim to be independent.
I agree; otherwise it might as well be an infomercial, like the ones you see at night. Speaking of that, my favorite one is http://www.infomercial-hell.com/tajazzle/

If I may add, testimonies are given the benefit of doubt and trusted after verification by experience. We all follow the same pattern. Friends, commercials, labels, certifications, receipts, documents. All the same pattern.

The original post is so long-winded I had trouble working out whether spainach_12 is for or against testing of security products or is just trying to list every possible consideration associated with such testing.

It is impossible to compare one product against another because it takes too long to run tests against all known malware, and the results would be out of date before the test was finished because of existing-but-unknown malware and new malware created during the test.

Most published tests compare detection of a subset of all known malware, because this is the only practical way to give a comparison within a reasonable time-frame. Comparing protection takes much longer because it is impossible to automate. The few published tests that do this always use a relatively small malware sample (hundreds or thousands rather than millions). Either way, none of these tests can prove which product is better. If you don’t believe this, then pick any two security products and compare as many published results for them as you can find. You will almost certainly find that these results are inconsistent.

Ah. Forgive me, but that is the best I can do to shorten it. It helps to isolate the parts to determine the general overview.

Do remember that it is a post that discusses the economics of testing products, not their credibility or the ethics that govern them (though these were partially discussed, to prove the point, as issues and misconceptions that surround the concept).

As you said, there is no “credible” test that would show the absolute effectiveness of products. Under the presumption, however, that we are dealing with economics rather than credibility, for purposes of practicality these testing methods are not to be taken as wholly true but as testimonies.

Under the premise of economics:

Since we have concluded that cybersecurity is a business industry itself, we may in part say that their target consumers are the average users, and thus, employing and advertising the most practical and popular method remains to be the ideal resolution to the problem. Since testing of security products is done in order to compare products, the most logical method of testing is testing whatever is common in all products: signature based engines. Since we have also concluded that testing in the business world are for advertising for the competitors, and for the consumption of average users, it is based only on testimony to which the consumers must decide whether or not they ought to believe this... In the effort of widening our view, we collect these testimonies and assert them as truthful. Thus, detection tests are conclusively not obsolete.

I did assert the flaws of these tests.

No tests can determine with absolute conviction the capacity of a product to protect a system given the astounding value of permutations and combinations of possible attack vectors and of course the risk of error and the unknowns make any attempt to measure protection invalid.
The same applies to all other methods of testing; hence the absolutism of the first statement (“No tests can determine...”).

But again, to avoid digressing from the topic, we will view these tests in light of economics.

The author of this thread, however, would like to propose that in the assumption that the target audience is the average user, detection tests remain practical particularly because it remains to be the most popular and widely used method of responding to infections and thus, whether or not transparency is exhibited in the testing conducted, they are to be taken as testimonies and are yet to be verified through personal experience. No moral principle is violated in such conditions regardless of whether the testing presents itself as conclusive or not as had been pointed out earlier.

And thereby the conclusion that these tests (and I shall reiterate), under the view that it is a business, are not obsolete.

!ot!
Digressing from that view, however, it is indeed true that no test can be absolute in determining the capacity of a product. Your post addresses two issues that concern the tests.

  1. Reliability

I fully agree. However, this does not invalidate the tests, since
(1) tests are taken as testimonies, and this is asserted in the fields of both the sciences and the humanities; thus, all computations are factored in with room for error. Statistical testing theory always considers statistical error an integral and highly necessary factor in all computations. With this in mind, we assert that the tests are not conclusive but are taken as testimonies to widen our view. (For this reason, too, was the word “theory” conceived.)
(2) a series of tests shows product reliability in terms of development. From this we can assert to some degree that a product/company is reliable, and base our choice of products on those results. Not absolute, but practical and logical nonetheless.
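The point in (1) about room for error can be made concrete. A measured detection rate is only an estimate, and a normal-approximation confidence interval (a standard statistics formula, not anything specific to AV testing) shows how wide the error band is for a given sample size:

```python
import math

def detection_rate_ci(detected: int, total: int, z: float = 1.96):
    """Approximate 95% confidence interval (z = 1.96) for a detection rate,
    using the normal approximation to the binomial distribution."""
    p = detected / total
    margin = z * math.sqrt(p * (1 - p) / total)
    # Clamp to [0, 1] since a rate cannot leave that range.
    return max(0.0, p - margin), min(1.0, p + margin)
```

Detecting 95 of 100 samples gives an interval of roughly 90.7% to 99.3%, so two products scoring 95% and 97% on a hundred-sample test are statistically indistinguishable, which is exactly why a single published score should be read as testimony rather than proof.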

  2. Sample size

Again, two points:
(1) the millions of malware samples are, more or less, variants of previously existing ones, cutting the number down significantly;
(2) despite the number, most malware share certain characteristics for certain purposes, again narrowing down the number.

If you were under the assumption that these tests were for the consumption of a select group of people, then
(1) an organization of a private group would be better suited for this;
(2) the reports should be more technical and specific rather than general, following an agreed format.

The publication of the results to the general public (as a method of advertisement, as in quotes like “Product of the year in ProductComparison”) contradicts that assumption. The question now is, “For what purpose?”