Archive file scanning - recursively, too

Hi. I was wondering about something practical that may not have come up yet on the subject of archive AV scans; namely, the delay factor.
What if you download an archive file that is infected, and you treat it as a ‘scanned’ file, ready for use? By the time the real-time scanner picks up the infection - in a file you are already counting on - it can take a lot of time to find a replacement.
And on second thought, if there are archive files nested within archive files, wouldn’t it be prudent to know up front that one of the components of the file you need will have to be replaced, rather than discovering it only when you extract the inner files and the real-time scanner finally gets to test them?
I would like to see recursive archive scanning - at least for downloaded files.
Thank you.

I’ve read this several times, and I’m still not sure what it is you want added.

You’re saying that recursively scanning archives will somehow save you time? I’m not sure how this is supposed to happen.

Scenario A: (Actual method of operation)
-I download a file.
-I run the installer. While unpacking, CIS’s real-time scanning engine discovers the nasty buried in the archive.
-I start looking for alternatives.

Scenario B: (Proposed method of operation)
-I download a file.
-CIS scans several layers into the archive and discovers the nasty buried in the archive.
-I start looking for alternatives.

Same thing really… Am I missing something?
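
Just to make the Scenario B step concrete: ‘scans several layers into the archive’ is conceptually a depth-limited recursion over archive members. Below is a rough sketch of the idea, assuming libzip for unpacking and a placeholder scan_buffer() standing in for a real signature engine - an illustration only, not how CIS actually implements it:

```cpp
#include <zip.h>        // libzip (assumed available)
#include <cstdio>
#include <string>
#include <vector>

// Placeholder for a real signature scan; here it only reports what it saw.
static bool scan_buffer(const std::string& name, const std::vector<char>& data) {
    std::printf("scanning %s (%zu bytes)\n", name.c_str(), data.size());
    return false;   // a real engine would return true on a match
}

// Crude check for a nested ZIP: the local-file-header magic "PK\x03\x04".
static bool looks_like_zip(const std::vector<char>& d) {
    return d.size() >= 4 && d[0] == 'P' && d[1] == 'K' && d[2] == 3 && d[3] == 4;
}

// Scan every member of an open archive, recursing into nested archives
// so that "archives within archives" are inspected before extraction.
static bool scan_archive(zip_t* za, const std::string& prefix, int depth) {
    if (depth > 8) return false;                       // guard against zip bombs
    bool infected = false;
    zip_int64_t n = zip_get_num_entries(za, 0);
    for (zip_uint64_t i = 0; i < static_cast<zip_uint64_t>(n); ++i) {
        zip_stat_t st;
        if (zip_stat_index(za, i, 0, &st) != 0) continue;
        std::vector<char> buf(st.size);
        zip_file_t* f = zip_fopen_index(za, i, 0);
        if (!f) continue;
        zip_fread(f, buf.data(), buf.size());
        zip_fclose(f);
        std::string name = prefix + "/" + st.name;
        if (looks_like_zip(buf)) {                     // go one layer deeper
            zip_error_t err;
            zip_error_init(&err);
            zip_source_t* src = zip_source_buffer_create(buf.data(), buf.size(), 0, &err);
            zip_t* inner = src ? zip_open_from_source(src, ZIP_RDONLY, &err) : nullptr;
            if (inner) {
                infected = scan_archive(inner, name, depth + 1) || infected;
                zip_close(inner);
            } else if (src) {
                zip_source_free(src);
            }
        } else {
            infected = scan_buffer(name, buf) || infected;
        }
    }
    return infected;
}

int main(int argc, char** argv) {
    if (argc < 2) return 1;
    int errcode = 0;
    zip_t* za = zip_open(argv[1], ZIP_RDONLY, &errcode);
    if (!za) return 1;
    bool bad = scan_archive(za, argv[1], 0);
    zip_close(za);
    return bad ? 2 : 0;
}
```

The point of the depth limit is simply that nested archives can be abused (zip bombs), so any recursive scan has to cap how far down it goes.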

Hi again. I thought over what was said, and I came up with two points:
-1)
For Scenario A:
-Say I download a lot of files.
-I don’t install one or more of them right away, because I am busy; there are other files to install. I have a ‘shopping’ list, and I’m selecting ‘purchases’.
-Tomorrow I will put together a ‘recipe’ for something, knowing that I won’t have to go back to that ‘store’, because today I ‘bought’ all the ‘ingredients’ I would need from there. Also, today I have access to ‘shopping’; tomorrow I may not.
-The next day I open one or more of those ‘ingredients’, and lo and behold, some of them are unusable!

For Scenario B:
-I download a lot of files.
-While at the ‘store’, I am told that the ‘product’ I was about to purchase is damaged!
-I do not purchase that item; instead I continue shopping and find another one - one which, I am told by an expert, is not damaged!
-I am now relieved, because:
-a) I will not have to go shopping again tomorrow;
-b) I may not even have the same ‘lift’ available tomorrow that I had today;
-c) tomorrow may be the deadline for this ‘recipe’, and it’s something I really need for work!

-2)
A lot of programs ship with archive files that remain compressed after the install.
So now I have a program that I think is usable - until one of those archives is unpacked, at which point I have a problem:
a) I may need a new program;
b) Having used the program before, perhaps without that file being part of the session, I am used to trusting it. Not only do I need a new program: I may not even recognize which program the deleted or quarantined file belonged to, or why that program no longer operates properly - if I realize it at all!

I know some of this (i.e. point 1) may sound unusual, but there may be other people who also find this a problem, so I hope this is a convenience that could be considered.

If the relevant CIS option is set, the contents of archive files are scanned, but only during a manual scan.

It would be nice if the Windows API that software uses to request a virus scan included scanning inside archive files. Firefox uses this interface at the end of a download. Such a scan could be controlled either by the existing manual-scan option or by a new option. As the OP indicates, it would be desirable to detect malware at the time of download rather than waiting until a bad file is extracted. If the file being extracted replaces an existing file, the existing file is gone before the new file is scanned and found to be bad; a scan of the archive itself would catch this before the bad file is extracted. I tend to always manually scan files I have downloaded before doing anything else with them, because I know they might not have been scanned when they were downloaded.
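
For reference, the interface in question is presumably IAttachmentExecute (Windows Attachment Execution Services): its Save() call hands a just-downloaded file to whatever antivirus providers are registered on the system, and whether those providers look inside archives is up to the AV vendor, not the caller. A minimal sketch of how a downloader might call it (error handling trimmed; the path and URL are made up for illustration):

```cpp
#include <windows.h>
#include <shobjidl.h>   // IAttachmentExecute, CLSID_AttachmentServices
                        // (typically linked against ole32.lib and uuid.lib)

// Ask Attachment Execution Services - and, through it, any registered
// antivirus provider - to check a file that was just downloaded.
// A failing Save() generally means the provider flagged or removed the file.
HRESULT ScanDownloadedFile(const wchar_t* localPath, const wchar_t* sourceUrl)
{
    IAttachmentExecute* pae = nullptr;
    HRESULT hr = CoCreateInstance(CLSID_AttachmentServices, nullptr,
                                  CLSCTX_INPROC_SERVER, IID_PPV_ARGS(&pae));
    if (FAILED(hr))
        return hr;

    // Callers usually also identify themselves via SetClientGuid() with
    // their own application GUID; omitted here.
    pae->SetLocalPath(localPath);   // the downloaded file on disk
    pae->SetSource(sourceUrl);      // where it came from (determines the zone)
    hr = pae->Save();               // invokes the registered scanners
    pae->Release();
    return hr;
}

int wmain()
{
    CoInitializeEx(nullptr, COINIT_APARTMENTTHREADED);
    // Hypothetical path and URL, only to show the call pattern.
    HRESULT hr = ScanDownloadedFile(L"C:\\Downloads\\example.zip",
                                    L"https://example.com/example.zip");
    CoUninitialize();
    return SUCCEEDED(hr) ? 0 : 1;
}
```

So the plumbing for a scan-on-download hook already exists on the Windows side; the question raised here is whether the AV product that answers the call digs into nested archives.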

Scheduled scans also scan archives.