Which AV would you choose?

You have got to learn that if you work hard enough dreams as well as wishes do come true. To me no one works as hard as Melih. He promises and he delivers. That is all I need to know.

PS: In this world there are people who criticize everything just for the sake of criticizing and bring morale down. However, there are also people who encourage others to do better and uplift morale as a result, without judging. Guess which category you fall into, RejZoR. You are a sad and depressing case, just to put it politely.

Peace, even unto you.

One thing is being depressive; another is being a realist.
A bunch of people say CIS detects 98%. 98% of what? Where is the proof everyone always wants from me?
They haven’t submitted CIS to AV-Test, AV-Comparatives, or even VB100%.

I don’t know who says that, but he or she is obviously wrong. I suppose that not even with CIMA heuristics (if they are good) would it reach that score at the moment.

But I’m still waiting for the malware sample that bypasses D+ running with the Proactive config, or that D+ lets kill Windows’ files.

CAVS detects 100% of the malware in its database :wink:

Anyway, AFAIK, it will be submitted to the places you suggested when CIMA heuristics are implemented. I’ll agree with you that CAVS is not the best AV in the world, but it certainly has the potential to be at the top. I’d rather use it than your precious avast, certainly.

:-TU

What good is great morale if your detection plain sucks? Morale doesn’t protect you, detection does.
And CIS lacks it greatly.

RejZoR, let independent tests show CIS’s capabilities.

You don’t have proof that the detection sucks, except for FPs and your so-hated packer detection.
Others don’t have proof that CIS detects 98%.

Please tell me where?

I remember that darcjrt was doing some testing of CIS against his own malware samples. The last post he did was in reference to database 1062. His post was here:
https://forums.comodo.com/feedbackcommentsannouncementsnews_cis/detection_rate_comparatives_about_cis_and_3rd_party_final_av_products_by_darcjrt-t33392.315.html
reply #324. He was getting a detection rate of 94.86%. The link to his Excel spreadsheet data is: Comodo Forum
Some of his earlier database figures included comparisons of other AV products, but this one doesn’t have that. I don’t know where the 98% figure comes from. Matousec testing of CIS 3.5.55810.432 gave it a product score of 90%. Those are the only two percentage scores that I remember seeing.

CAV needs to be tested, but our casual tests didn’t show the strength or the weakness of the AV, so when thinking about percentages, let’s think about Virus.GR, http://antivirus.about.com/, http://www.av-comparatives.org/, http://www.icsalabs.com/icsa/icsahome.php and Virus Bulletin.

Well, to answer The Joker’s question of where the 98% detection rate comes from: it was based on a personal test. I downloaded 500 samples of known malware from a well-known forum, on a do-it-yourself basis. CAVS detected and removed 490 out of 500 samples, making a 98% detection rate with high settings. I admit that this is not an AV-Comparatives-type test or similar, and some may not take it seriously, but there was also a post on Comodo’s forum a couple of weeks ago, when the database was around 2.2 million, where someone else ran a personal test and achieved a result around 97%. I put my result down to the growth of the database; as other Comodo forum testers have shown from earlier tests, as the database grows, the detection rates increase. That’s good enough for me at the moment, and I look forward to the new heuristics, which should help increase the rate further and reduce FPs. When Melih is ready and all the improvements/fixes are made, I am personally very confident CAVS will acquit itself very well detection-rate-wise and be able to hold its head up high in the company of the big boys.

Regards
Dave1234.
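For reference, the percentage quoted above is simple arithmetic. A minimal sketch (the `detection_rate` helper is illustrative; the sample counts are the ones from the post):

```python
def detection_rate(detected: int, total: int) -> float:
    """Return the detection rate as a percentage of total samples."""
    if total <= 0:
        raise ValueError("total must be positive")
    return 100.0 * detected / total

# Figures from the post above: 490 of 500 samples detected and removed.
print(f"{detection_rate(490, 500):.2f}%")  # prints 98.00%
```

The same calculation applied to darcjrt’s earlier figure would work in reverse: a 94.86% rate against a given sample set just means detected/total rounded to two decimal places.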

Thank you for the support Dave, really appreciate it.

Actually, your test is as good as the AV-Comparatives ones; the only difference is the number of samples. And I can give you all 3M samples we have, and you can test CAV against them, and then you will see that we catch 100% :slight_smile: These tests people rely on are very subjective and have no relation to what’s out there in terms of malware, because no one has a 100% view of all the malware out there… What matters in tests is: what’s out there in reality and how the user is being protected from it in real time! Collecting viruses left over from 2001, setting up an AV testing website, and testing AVs against them will prove nothing apart from the fact that you would have been good in 2001, not how you would be protected today! I mean, knock, knock to the AV testing people… heard of Conficker? Your 99.99999999% detection meant s*** all, didn’t it! :). It’s real protection that matters! Test that and stop misleading people! How many people who went with a 98% detection AV then got hit by Conficker thanks to your flawed and misleading AV tests, which give the wrong impression to people?!!

Start testing what matters: Real time Protection!

PS: Dave, sorry I used your post to respond to this silly AV testing ■■■■ that these so-called AV websites do…

Melih

If you think VB100%, AV-Test and AV-Comparatives are “silly” organisations, you’re doing it wrong from the start…

Plus, they don’t use samples from 2001; they have enough other junk that’s released daily.
DOS malware hasn’t even been included for a long time now.

Do not twist my words, RejZoR.

Read what I wrote…

Melih

What is CIS doing? It is NOT an AV. You may not want to use the “best AV.” You want to use the best overall security scheme. This is a different question, unless you say the “best AV” means the AV that fits in best with my other security components.

The old paradigm, which is not followed by security products now, was entirely a “housecleaning” model. The maid went through the house thoroughly removing dirt and stains, leaving only “clean” rooms. No lock on the door to prevent the dirt coming in. The maid who could recognize and cleanup dirt best was the best servant. The signature base was critical. If the cleanup was good and fast, it would be done before the dirt caused any harm.

As AV products developed, the “maid” became more of a “butler” who watched what was going on and fetched things for the master and mistress of the house. If the silver was not polished enough when the table was set, or the master of the house required the use of a room, the butler would check it on the fly and order the servants to clean it then, in “real time.” Of course, the butler would also watch how the other servants performed, and if one did not perform correctly, or was using dirty silver in the kitchen, he would catch that on the fly. Thus were born “heuristic analysis” and “behaviour blocking.” BOC was a novel innovation with a slightly different focus, looking at the very last minute before use of an item and making one last check as the item was “unpackaged” and raised for use in the master’s hand. The signature base was still important, but this approach greatly simplified the task of catching dirt, because the examination happened at the key time and involved a smaller comparison signature base.

Somewhere along the line the idea was developed to close windows and doors, and watch traffic, requiring “dirty shoes off” and no dirt allowed in! Real time AV began to merge with a firewall concept. Firewalls unfortunately are noisy and whenever anyone knocks on the door to enter, you either have to have a rule set for entry or check with the master and seek permission to enter.

Melih has used a similar security metaphor (door locks, burglar alarms, security sweeps, etc.) to describe CIS, pointing out that the job of closing windows and doors and watching all the traffic as it enters and leaves is so difficult that you will often fail to stop the dirt from getting in, fail to clean things up, or miss it when you check later. Even a great butler might miss things! So the idea of whitelisting was introduced: only allow known clean items to be used. This is a bit like keeping bright, shiny, new clean things in a pantry or cabinet and ONLY using those things, seeking approval from the master any time anything else is used. So now we have the noisy and annoying result that many complain about.

CIS uses a layered approach. The key to its success is how well it combines all the methods, and whether that combination of technologies succeeds best in the end.

Signature base and cleanup skill are only part of the mix. So the choice of AV depends on the other security components: not only how well they work, but how well integrated the components are. NO current AV testing really tests the end results of a combination, or tests the degree of integration.

I don’t see that he did? Let CIS take part in AV-Comparatives and we will see how good the detection is. It’s not ■■■■; whoever really thinks that must be silly.

I think it’s a great way to see how your product fares against the competition and what its weak points are.
If it fails in the FP field, something has to be done there. If detection is not good enough, improve the scan engine and add even more signatures. If the program is crashing during scans, fix the major bugs.
Comodo is young, and many will forgive it if it fails. But you at least know where it stands as it is.
If it progresses from STANDARD to ADVANCED and someday to ADVANCED+, that means it’s improving.
If it keeps receiving ADVANCED on several tests, that means they are holding a certain level of quality that’s not top-notch but still very good. If it goes up and then down again, it means they are not keeping up.
These tests tell far more than just a raw percentage; you have to understand that.
That’s why I want to see CIS there with the rest in these tests.

Perhaps you care to point to the wording where I said “silly organisations”, then?
My issue is with the method employed, not with the entities doing it. I am sure those people are nice people trying to earn a living.

So he did twist my words, moving them from a method to people and organisations.

Melih

Excellent description 00hmh (:CLP)

Is that why you seem reluctant to submit CIS for testing?
If the methodology these testing organizations use makes them silly, why do they use those methods and why do they have credibility?
Do you disagree with the methodology of most testing organizations performing these tests?
What can you do about changing the testing methods utilized by the organizations whose methods you disagree with?
Are there testing organizations that utilize methods that you actually endorse?
What difference does it make if you disagree with a testing organization’s methods, if that testing organization uses those methods uniformly for all products being tested?
Doesn’t testing give users and product developers benchmarks and desirable feedback about the product they are using?

Ah, so many questions and so few answers, but test avoidance has a tendency to do that!

When COMODO thinks about its AV publicity, what will it do?

Show results from The Joker’s malware collection? From dave1234’s malware collection? What else?