An interesting article from Larry Seltzer...

Here is an interesting article from Larry Seltzer, who writes about code signing. Please go ahead and tell us your views. We both want to see an improvement in the way code signing is done and would love to hear more views and ideas.



I agree with Larry that code signing does not imply safety.

I'm not sure whether validation of the code itself should be done by CAs. If something like this were put in place, I guess it would mean that executable code could no longer be signed directly by developers but would have to undergo something like MS WHQL certification.
If so, I suspect this would slow down application releases.

Maybe CAs could reserve the right to revoke a developer's certificate if they find malicious code at any point during the life-cycle of their applications. This should provide a better outcome than a one-time scan. If a malicious developer is found, that information should be disclosed to all CAs through a fast, privileged channel.
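As a minimal sketch of the idea, such a shared revocation feed could be as simple as a set of revoked certificate serials that verifiers consult before trusting a signature. All names here are hypothetical, not a real CA protocol:

```python
# Hypothetical sketch: a revocation feed shared among CAs, consulted at
# verification time. Not a real CA protocol; all names are illustrative.

revoked_serials = set()  # populated from a fast, privileged CA-to-CA channel

def publish_revocation(serial: str) -> None:
    """A CA that finds malicious code revokes the developer's certificate."""
    revoked_serials.add(serial)

def is_trusted(cert_serial: str) -> bool:
    """Verifiers reject any code signed with a revoked certificate."""
    return cert_serial not in revoked_serials

publish_revocation("0xBADC0DE")
print(is_trusted("0xBADC0DE"))  # False
print(is_trusted("0x1234"))     # True
```

The point is that revocation keeps working for the whole application life-cycle, unlike a scan performed only once at signing time.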

But IMHO code signing should not bear any meaning other than authenticating the source of the software.

Still, there is other information that would carry valuable meaning.

For example there could be different levels of code signing.
One of these levels could certify ISO 9001 compliance or something similar.
Looking at the certificate, it would be nice to know how the developer handles QA and exploits.
This way users could know how many days it will take to fix an exploit, or any other information/certification indicative of a qualified organization.
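To make this concrete, here is a hedged sketch of what such certificate metadata might look like. None of these fields exist in real X.509 certificates; they are invented for illustration:

```python
# Hypothetical sketch: extra certificate fields describing the developer's
# quality process. These fields are invented, not part of real certificates.
from dataclasses import dataclass

@dataclass
class SigningCertInfo:
    developer: str
    iso9001_certified: bool   # e.g. an audited QA process
    patch_sla_days: int       # promised days to fix a reported exploit

def describe(cert: SigningCertInfo) -> str:
    """Render the QA metadata in a human-readable form for users."""
    qa = "ISO 9001 certified" if cert.iso9001_certified else "no QA certification"
    return f"{cert.developer}: {qa}, fixes exploits within {cert.patch_sla_days} days"

cert = SigningCertInfo("Example Corp", True, 30)
print(describe(cert))
```

A user (or an installer UI) could then weigh this information before trusting the signature, rather than treating all signed code as equally safe.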
Even software with no hidden payload can be exploited, and there is also a chance that exploitable code could be written intentionally.

Another valuable piece of information could be a behavioural fingerprint of the software.
Such a thing may be difficult to achieve, but a metalanguage describing how a piece of software will behave could be interpreted by other security software to score the safety of an application, or rendered in a human-readable format to let users peek inside those software black boxes and assess the real identity of an application.
Such a behavioural fingerprint metalanguage could be used to provide a standard for training HIPS software, and to let users restrict or deny specific behaviours they don't like before an app is launched.

Such a metalanguage should be abstract enough to provide a description that is much more informative than a list of APIs used by an app, yet concrete enough to be used by security software to enforce secure behaviour. This way we could have software working like a gatekeeper/watchdog, looking for any misbehaviour or unexpected end result.
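A minimal sketch of the gatekeeper idea, assuming an invented manifest format (no such metalanguage standard exists): the developer declares intended behaviour, and a HIPS-style watchdog denies anything outside the declaration.

```python
# Hypothetical sketch of a behavioural-fingerprint "metalanguage": the
# developer declares where the app writes and whether it emits executables;
# a watchdog checks each write against the declaration. Format is invented.
import os

manifest = {
    "app": "ExampleEditor",
    "writes_only_under": ["C:/Program Files/ExampleEditor",
                          "C:/Users/alice/Documents",
                          "C:/Temp"],
    "writes_executables": False,
}

EXEC_EXTS = {".exe", ".dll", ".bat", ".scr"}

def allowed_write(manifest: dict, path: str) -> bool:
    """Gatekeeper check: is this write consistent with the declared fingerprint?"""
    path = os.path.normpath(path)
    # Deny executable output if the fingerprint says the app never emits it.
    if not manifest["writes_executables"] and \
            os.path.splitext(path)[1].lower() in EXEC_EXTS:
        return False
    # Deny writes outside the declared folders.
    return any(path.startswith(os.path.normpath(root) + os.sep)
               for root in manifest["writes_only_under"])

print(allowed_write(manifest, "C:/Temp/draft.txt"))          # True
print(allowed_write(manifest, "C:/Windows/system32/x.dll"))  # False
```

The same declaration could be rendered in plain language for users ("this app only writes to its own folder, your documents, and temp"), or fed to HIPS software as a default policy.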

I guess it would be nice to know if an app was at least designed to work only in its program folder, My Documents and the temp folder, without spreading suspicious files across the hard drive or overwriting files with executable extensions.

IMHO one of the advantages of code signing is that once a malicious app is found, it becomes possible to block all the applications from that developer.