GUI/Flow Help for creating the easiest-to-use central management for CPF

Hi Guys

Here is a new posting that we can work on with our suggestions. I would recommend we list what features we want in this first, then we can all suggest (via graphics if you wish) how you would see implementing this feature list. Then we all discuss and choose the best way forward.

Is that good? Anyone want to follow another process? Please suggest.
thanks
Melih

Hey there -

Thanks for the background info, panic. I feel like I’ve started reading the book from the 5th chapter! I’ll start with some feedback regarding your previous musings…

Yes, I guess that is where I was coming from. It has been my experience, however, that if a home has 3+ PCs connected together in a LAN, then chances are one of them is kept on 24/7 and could act as a server. Perhaps we should have a poll to confirm/reject my theory?

Besides, if we are talking about creating a “centralised management” product, to my way of thinking that implies that the product is located on one machine - even if it can be accessed from any machine.

The method we are discussing is as follows:

1 ) First PC turned on announces itself as the master.
2 ) When it receives nil response it assumes the role of “master”.
3 ) This PC would then connect and do a search for updates to ensure that the “master” has the most recent libraries.
4 ) As other PCs join the LAN, they attempt a “master” announcement and receive a DENY response from the existing master, which establishes the routing to the “master”. Similar to the LMAnnounce that Windows workstations currently do.
5 ) Sub-ordinate updating and synchronising would commence from the “master” if updates are required (A flag from the “master” would be needed if it was in the middle of downloading the updates, to temporarily suspend sub-ordinate activity).
6 ) The “I’m the master” announcement from all PCs on the LAN would be repeated at a pre-determined interval (based on each workstation’s login time, to stagger announcements from sub-ordinates), in case the “master” drops off the LAN. In this case, the remaining PCs on the LAN would re-arbitrate among themselves to select a new “master”.
7 ) If the old “master” that dropped off the LAN logs on again, it would make an announcement as in step 4 but receive a negative response and would become a sub-ordinate to the newly arbitrated “master”.
8 ) When updates are being pushed to a sub-ordinate, have CPF temporarily block outbound traffic, and block inbound traffic from all sources other than the “master”, in case the update being installed is to fix a flaw currently on that PC.
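The arbitration in these steps can be modelled as a toy simulation (a purely illustrative Python sketch; real nodes would exchange UDP broadcasts on the LAN, and every class and method name here is my own invention, not anything from an actual Comodo product):

```python
class Node:
    """One PC on the LAN, taking part in master arbitration."""
    def __init__(self, name):
        self.name = name
        self.role = "unknown"

class Lan:
    """In-memory stand-in for the LAN broadcast domain."""
    def __init__(self):
        self.master = None

    def announce(self, node):
        # Steps 1-2: announce as master; nil response means assume the role.
        if self.master is None:
            self.master = node
            node.role = "master"
        else:
            # Step 4: the existing master sends DENY; newcomer subordinates.
            node.role = "subordinate"
        return node.role

    def master_leaves(self):
        # Steps 6-7: the master drops off; the next announcement wins.
        self.master = None

lan = Lan()
pc1, pc2, pc3 = Node("PC-1"), Node("PC-2"), Node("PC-3")
print(lan.announce(pc1))  # master: first PC on, nil response
print(lan.announce(pc2))  # subordinate: DENY from the existing master
lan.master_leaves()       # master drops off the LAN
print(lan.announce(pc3))  # master: re-arbitration fills the void
```

The same logic covers step 7: a returning old master simply announces again and is denied like any other newcomer.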

This sounds very, very interesting… A couple of things, though:

  • It sounds like you would build this functionality into each Comodo product itself, allowing functionality for up to 3 clients before requiring some sort of paid licence. This then could be interpreted as creating a “Pro” version of the software, something which I caution against (see my earlier post).
  • In a “home” environment with no highly-available machines, the idea of arbitrating a master makes sense. However, in such an environment, the idea of centralised updates - one of the goals of centralised management - falls over. Each and every client would potentially get a turn at being the master and therefore download the updates. For a 5-PC LAN, we now end up with 5 sets of uncoordinated updates distributed over 5 PCs - which kind of goes against the definition of “centralised updates” :).

I think we have to first decide what this product is for. If one of its goals is to centralise updates and minimise the resultant internet traffic, then it makes very little sense to involve each and every client in the way you describe. If one of its goals is to centralise and distribute program policies, how can that happen when any individual PC can at any time be the master? How do we then communicate with the master when we want to update a firewall policy, for example? How do we know if its policy is up to date?

This method, I believe, relieves the burden of one fixed master, enhancing consistency and constancy. The announcements would only need to be a "blip" in terms of traffic and the interval could be as far apart as one hour.

Agreed - the network traffic required for arbitration would be minimal.

The one feature that is really required for Comodo apps is password protection (what good is a security app that is unsecured??? LOL). We’ve also discussed including password protection for the firewall and antivirus configuration and having this config passed around the LAN along with the updates. This would prevent any one workstation having its security turned off and further enhance both consistency and constancy.

Absolutely. Thinking further, I guess that if this new management system were to be implemented, there should be an option somewhere to “turn it off”, i.e. for a client to not be part of the management group. This could be a (password-protected?) option in the client, or perhaps as part of the management interface - you could interrogate the LAN for all Comodo client apps and then include/exclude them in managed groups.

Re. the sandpit: instead of having the sandpit layer as a memory-resident portion sitting around, waiting to be killed by malware, what if it consisted of a loader that called the sandpit module? The loader portion could be launched at regular intervals by the built-in Windows Scheduler.

This way, the loader is only running when the schedule demands it (or on call by the user). The loader could be loaded into memory and subsequently load the sandpit module (under a randomly generated name) as needed and exit gracefully when the sandpit operation is completed. Of course, the loader would have to communicate the current random name to the firewall as a safe app. By this method, it only appears in memory on a transient basis. A moving target is harder to hit.
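The transient-loader idea might be sketched like this (all function names and callbacks here are hypothetical; the firewall registration and the sandpit module itself are stubbed out as callables, since the real interfaces are not defined anywhere in this thread):

```python
import os
import secrets
import shutil
import tempfile

def launch_sandpit(module_path, work_dir, register_safe, run_module):
    """Copy the sandpit module under a throwaway random name, register that
    name with the firewall as a safe app, run it, then remove it and exit."""
    random_name = f"sandpit_{secrets.token_hex(8)}.bin"
    transient_path = os.path.join(work_dir, random_name)
    shutil.copy(module_path, transient_path)
    register_safe(random_name)        # firewall learns the one-off safe name
    try:
        run_module(transient_path)    # the sandpit does its work here
    finally:
        os.remove(transient_path)     # nothing persists between runs

# Demo with stand-in callbacks in place of the firewall and the real module:
with tempfile.TemporaryDirectory() as tmp:
    module = os.path.join(tmp, "sandpit_module.bin")
    with open(module, "wb") as f:
        f.write(b"sandpit code")
    seen = []
    launch_sandpit(module, tmp, seen.append,
                   lambda path: seen.append(os.path.exists(path)))
    print(seen)   # e.g. ['sandpit_1a2b...bin', True]
```

Each run produces a fresh random name, so the module is only ever in memory, and on disk, transiently; a moving target, as described.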

Possibly the sandpit and the updater/synchroniser could be amalgamated into a single layer.

Now I’m afraid you’ve got me… I’m a complete newbie when it comes to sandboxing/sandpitting. I’m gonna have to do some more googling/wikipedia’ing! :slight_smile: If you’re talking about extending the host intrusion detection/prevention capabilities of the firewall, isn’t this more of a client issue, rather than a central management issue? I am probably missing something… (time to hit the books)…

Cheers,

dooplex

The sticking point came when we considered how home LANs are used, given that they don't have a single dedicated server (which seems to have been an assumption in your posting).
Yes, I guess that is where I was coming from. It has been my experience, however, that if a home has 3+ PCs connected together in a LAN, then chances are one of them is kept on 24/7 and *could* act as a server. Perhaps we should have a poll to confirm/reject my theory?

Great idea about the poll. I’ll try and remember to set it up tonight.

“could” being the operative word.

If the “server” was down, couldn’t this lead to the rest of the LAN being exposed? If the server was dynamically assigned, then there is no one thing to be down and no one single point of failure.

Besides, if we are talking about creating a "centralised management" product, to my way of thinking that implies that the product is located on one machine - even if it can be accessed from any machine.

I was thinking “centralised” in a very time-specific context. Only one object on the LAN at any one time would contact Comodo and download all available updates. This one object would then co-ordinate the distribution of the updates to all other objects on the LAN. Home networks tend to be rather nebulous, with no fixed overall schema. This is why I have focussed on making the management match the environment - dynamic.

The method we are discussing is as follows:

1 ) First PC turned on announces itself as the master.
2 ) When it receives nil response it assumes the role of “master”.
3 ) This PC would then connect and do a search for updates to ensure that the “master” has the most recent libraries.
4 ) As other PCs join the LAN, they attempt a “master” announcement and receive a DENY response from the existing master, which establishes the routing to the “master”. Similar to the LMAnnounce that Windows workstations currently do.
5 ) Sub-ordinate updating and synchronising would commence from the “master” if updates are required (A flag from the “master” would be needed if it was in the middle of downloading the updates, to temporarily suspend sub-ordinate activity).
6 ) The “I’m the master” announcement from all PCs on the LAN would be repeated at a pre-determined interval (based on each workstation’s login time, to stagger announcements from sub-ordinates), in case the “master” drops off the LAN. In this case, the remaining PCs on the LAN would re-arbitrate among themselves to select a new “master”.
7 ) If the old “master” that dropped off the LAN logs on again, it would make an announcement as in step 4 but receive a negative response and would become a sub-ordinate to the newly arbitrated “master”.
8 ) When updates are being pushed to a sub-ordinate, have CPF temporarily block outbound traffic, and block inbound traffic from all sources other than the “master”, in case the update being installed is to fix a flaw currently on that PC.

This sounds very, very interesting... A couple of things, though:

It sounds like you would build this functionality into each Comodo product itself, allowing functionality for up to 3 clients before requiring some sort of paid licence. This then could be interpreted as creating a “Pro” version of the software, something which I caution against (see my earlier post).
In a “home” environment with no highly-available machines, the idea of arbitrating a master makes sense.

There would need to be some communication between each of the individual apps to the separate management module, that’s unavoidable. Comodo are looking into this overall concept at the enterprise level, which is why they are looking at this as a paid module. I believe that this is an invaluable resource to have extended downwards into the home networking space, if only for reasons of constancy and consistency.

However, in such an environment, the idea of centralised updates - one of the goals of centralised management - falls over. Each and every client would potentially get a turn at being the master and therefore download the updates. For a 5-PC LAN, we now end up with 5 sets of uncoordinated updates distributed over 5 PCs - which kind of goes against the definition of "centralised updates".

While each and every PC on the LAN “could” download updates, only one actually would. The first PC on the LAN would have announced itself as the “master” and when the others started up and attempted to announce themselves as the “master” they would have received a “deny” response from the “master” and would then take on the role of subordinates. Only the master would contact Comodo and download updates, which it would then distribute to the subordinates, at which point all PCs on the LAN would be synchronised. If the master gets turned off, a re-arbitration would occur at some point and a new “master” would step in to fill the void left by the departing “master”. Again, we still have only one PC downloading and distributing.

I think we have to first decide what this product is for. If one of its goals is to centralise updates and minimise the resultant internet traffic, then it makes very little sense to involve each and every client in the way you describe.

I believe it makes less sense to knowingly introduce a single point of failure. :wink: Besides, whatever PC is contacting Comodo and distributing updates, it is only acting in that capacity in a transient manner.

If one of its goals is to centralise and distribute program policies, how can that happen when any individual PC can at any time be the master?

It is only the master in terms of downloading and distribution. Once passwording is introduced, a passworded login to the management module (which would be on all PCs, but only be in download/distribution mode on one) would allow a change to be effected on THAT PC, and those changes could then be distributed as a policy change, as opposed to an update change.

You’ve raised a very good point here. I think the management module needs to be able to operate in two modes. One mode would be if the PC is the arbitrated master and is capable of downloading and distributing updates for all Comodo apps. The second mode would be for application config, policy changes and password changes and would be available on any PC on the LAN, and would be entered into by logging in using the master password (which is distributed with the updates).
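The two modes could be sketched roughly like this (illustrative only; in particular, a real module would store a password hash and never compare plaintext passwords as done here, and all names are my own invention):

```python
class ManagementModule:
    """Sketch of the two-mode management module discussed above."""
    def __init__(self, is_master, master_password):
        self.is_master = is_master
        self._password = master_password   # simplification: plaintext!
        self.config_mode = False

    def can_distribute_updates(self):
        # Mode 1: only the arbitrated master downloads/distributes updates.
        return self.is_master

    def enter_config_mode(self, password):
        # Mode 2: config, policy and password changes, available on ANY PC
        # on the LAN, behind the master password (which is distributed
        # along with the updates).
        if password == self._password:
            self.config_mode = True
        return self.config_mode

mm = ManagementModule(is_master=False, master_password="s3cret")
print(mm.can_distribute_updates())     # False: a subordinate never downloads
print(mm.enter_config_mode("s3cret"))  # True: mode 2 works on any PC
```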

How do we then communicate with the master when we want to update a firewall policy, for example? How do we know if its policy is up to date?

See above.

Absolutely. Thinking further, I guess that if this new management system were to be implemented, there should be an option somewhere to "turn it off", i.e. for a client to *not* be part of the management group. This could be a (password-protected?) option in the client, or perhaps as part of the management interface - you could interrogate the LAN for all Comodo client apps and then include/exclude them in managed groups.

The primary thrust of my thinking is at the home LAN level. At the home LAN level, the entire idea is to make sure that no-one is different. Get it solid and then expand to a commercial version. It’s easier to add functionality than to “nobble” an app and then wait for missing dependencies to crop up.

Do you think we should be focussing on the commercial side of things and then dumb it down for the home LAN, or should we start with the home LAN version and build upon that?

Cheers,
Ewen :slight_smile:

What do you think?
Ewen :slight_smile:

I just noticed that we’ve not even touched upon what the GUI might look like - panic & I are both too fascinated with what will be under the hood :slight_smile: Bear with us, Melih - we’ll get there…

I figure you mean “exposed” as in “potentially not up-to-date”. Yes, this could be the case in a single server environment. But you could always allow the client software to fall back to standard internet updates.

Of course, we are not looking at this centralised management software solely for software updates. Policy distribution is an important task too - and there is no fall back solution for a one-server scenario (or at least I haven’t dreamed one up yet!)

I was thinking "centralised" in a very time-specific context. Only one object on the LAN at any one time would contact Comodo and download all available updates. This one object would then co-ordinate the distribution of the updates to all other objects on the LAN. Home networks tend to be rather nebulous, with no fixed overall schema. This is why I have focussed on making the management match the environment - dynamic.

Yes, that makes sense. It’s an intriguing idea… One thing I like about it is that as far as I can see, the system with the most uptime would almost naturally become the master.

There would need to be some communication between each of the individual apps to the separate management module, that's unavoidable. Comodo are looking into this overall concept at the enterprise level, which is why they are looking at this as a paid module. I believe that this is an invaluable resource to have extended downwards into the home networking space, if only for reasons of constancy and consistency.

Heavens, yes. I know of no other vendor that is prepared to offer this level of sophistication in any free product - and most aren’t this clever even when you pay for them! Let’s hope we can get it past the vapourware stage :slight_smile:

While each and every PC on the LAN "could" download updates, only one actually would. The first PC on the LAN would have announced itself as the "master" and when the others started up and attempted to announce themselves as the "master" they would have received a "deny" response from the "master" and would then take on the role of subordinates. Only the master would contact Comodo and download updates, which it would then distribute to the subordinates, at which point all PCs on the LAN would be synchronised. If the master gets turned off, a re-arbitration would occur at some point and a new "master" would step in to fill the void left by the departing "master". Again, we still have only one PC downloading and distributing.

Yep, got it. But what are we doing it for? Why have centralised updates, as opposed to each client downloading the update itself? I can think of only two possible reasons:

  • To minimise internet bandwidth usage, particularly for organisations with a large number of clients
  • To better control when the internet bandwidth required for updates is used

The first scenario has bandwidth benefits roughly proportional to the number of clients, and generally in environments with a large number of clients, there is a high-availability server lurking somewhere. If the environment is so dynamic as to truly benefit from a dynamically assigned master, then you’re not going to reduce internet bandwidth usage. Consider the following 3-PC home LAN:

  • All PCs are initially off.
  • PC-1 boots, declares itself the master, polls for and downloads updates.
  • PC-1 shuts down.
  • PC-2 boots, declares itself the master, polls for and downloads updates.
  • PC-2 shuts down.
  • PC-3 boots, declares itself the master, polls for and downloads updates.
  • PC-3 shuts down.

Three PCs essentially perform three internet updates - no bandwidth is saved. But, you say, that’s a pretty extreme example, and in that situation they may as well not use the centralised management software. I would agree with you. The more dynamic the network, the less you benefit from a feature like centralised updates.

…so why have centralised updates at all? I think it only really begins to make sense in an environment with many clients, a server somewhere, and limited internet bandwidth. As an aside, AVG only recommend their LAN-based update software in LANs with more than 15 clients.

I believe it makes less sense to knowingly introduce a single point of failure. ;) Besides, whatever PC is contacting Comodo and distributing updates, it is only acting in that capacity in a transient manner.

As long as the server is reasonably stable, it shouldn’t be a problem. I can foresee potentially more outage time caused by the delay between when a master goes offline and when the next arbitration/election occurs (point 6 on your original list). If we were to design such a system, this delay would have to be minimised. Perhaps the master should be pinged at regular intervals? Or the arbitration/election period should be set to an appropriately small value? Perhaps the master should send out a “heartbeat” once a minute or so, telling clients it is still there? The heartbeat information could include update/version information, which the clients could then use to determine whether any updates are required.
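The heartbeat-with-timeout idea could look something like this (a minimal sketch; the one-minute interval and three-missed-beats threshold are arbitrary values chosen purely for illustration):

```python
def master_is_alive(last_heartbeat, now, interval, missed_allowed=3):
    """Return True while the master's heartbeat is recent enough.
    Subordinates trigger a re-arbitration/election once `missed_allowed`
    heartbeat intervals have passed in silence."""
    return (now - last_heartbeat) <= interval * missed_allowed

# With a heartbeat every 60 seconds and re-arbitration after 3 missed beats:
print(master_is_alive(last_heartbeat=0, now=120, interval=60))  # True
print(master_is_alive(last_heartbeat=0, now=200, interval=60))  # False: elect a new master
```

Tuning `interval` and `missed_allowed` trades network chatter against the outage window between the master going offline and a new one being elected.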

It is only the master in terms of downloading and distribution. Once passwording is introduced, a passworded login to the management module (which would be on all PCs, but only be in download/distribution mode on one) would allow a change to be effected on THAT PC, and those changes could then be distributed as a policy change, as opposed to an update change.

So the client would then forward the policy update to the master for distribution? Yep, that sounds like it would work… The policy would have to have some attached version information, to determine which policy should be propagated. The PC that is being used for the policy update must also ensure it is using the most current policy before the edit. If versioning were to be done by timestamp, some system should be in place to ensure that all clients have reasonably synchronised clocks.
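A sketch of that version check, assuming a simple monotonically increasing counter rather than timestamps (which sidesteps the clock-synchronisation problem just mentioned; the policy structure here is invented for illustration):

```python
def newer_policy(local, incoming):
    """Return whichever policy copy should win. A monotonically increasing
    version counter avoids relying on synchronised clocks across clients."""
    return incoming if incoming["version"] > local["version"] else local

local = {"version": 4, "rules": ["allow master traffic"]}
incoming = {"version": 5, "rules": ["allow master traffic", "block p2p"]}
print(newer_policy(local, incoming)["version"])  # 5: propagate the newer policy
print(newer_policy(incoming, local)["version"])  # 5: a stale copy never wins
```

A PC editing the policy would first pull the highest-versioned copy, apply the change, bump the counter, and hand the result to the master for distribution.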

The primary thrust of my thinking is at the home LAN level. At the home LAN level, the entire idea is to make sure that no-one is different. Get it solid and then expand to a commercial version. It's easier to add functionality than to "nobble" an app and then wait for missing dependencies to crop up.

Do you think we should be focussing on the commercial side of things and then dumb it down for the home LAN, or should we start with the home LAN version and build upon that?

I think the desire to have everybody’s software configured the same is of great interest to the corporate market too - look at NT/2K policies and thin clients (e.g. Citrix MetaFrame). Personally, I think a product such as this is best developed initially for the corporate environment, with a view to it working equally well on a home (peer) network. The software is, at the end of the day, supposed to earn money for Comodo! This, of course, would mean we have to be careful about what assumptions we make on day one… If we design fantastic management software that relies on Active Directory, or a high-availability server, or even MMC, then that may very well limit its applicability to smaller setups.

Lastly… What about a method to automatically install Comodo software on client machines that do not yet have it? This is very appealing to the corporate market. Perhaps something called in a logon script?

Hang in there, Melih. I think we need to understand the motor before we decide on the upholstery. :wink:

I figure you mean "exposed" as in "potentially not up-to-date". Yes, this could be the case in a single server environment. But you could always allow the client software to fall back to standard internet updates.

If the workstations re-arbitrated, the need to fall back to standard update methods is avoided. The standard method still needs to be there, though.

Yes, that makes sense. It's an intriguing idea... One thing I like about it is that as far as I can see, the system with the most uptime would almost naturally become the master.

If the “master” CAN be any machine on the LAN, uptime is no longer a factor and the reliance on a particular PC is removed.

Heavens, yes. I know of no other vendor that is prepared to offer this level of sophistication in any free product - and most aren't this clever even when you pay for them! Let's hope we can get it past the vapourware stage :)

If this master/subordinate model sees the light of day, I think Comodo will scoop the home/small business market that does not have a dedicated server. I’ll try and hunt up some stats on the nature, size etc. of small business networks. Using my wife’s client list as a starting point, she has 114 companies on her client list. In total, their collective networks total 764 PCs. Of those 114 networks, 4 have a dedicated server and these 4 networks combined have just over 100 PCs connected to their respective servers. This leaves around 660 PCs spread over 110 networks that do not have the luxury of a dedicated server. I think this is too big a market segment to ignore, and it may necessitate both a server-dependent model and a peer-to-peer LAN model. What do you think?

Yep, got it. But what are we doing it for? Why have centralised updates, as opposed to each client downloading the update itself? I can think of only two possible reasons:
  • To minimise internet bandwidth usage, particularly for organisations with a large number of clients
  • To better control when the internet bandwidth required for updates is used

The first scenario has bandwidth benefits roughly proportional to the number of clients, and generally in environments with a large number of clients, there is a high-availability server lurking somewhere. If the environment is so dynamic as to truly benefit from a dynamically assigned master, then you’re not going to reduce internet bandwidth usage. Consider the following 3-PC home LAN:

  • All PCs are initially off.
  • PC-1 boots, declares itself the master, polls for and downloads updates.
  • PC-1 shuts down.
  • PC-2 boots, declares itself the master, polls for and downloads updates.
  • PC-2 shuts down.
  • PC-3 boots, declares itself the master, polls for and downloads updates.
  • PC-3 shuts down.

Three PCs essentially perform three internet updates - no bandwidth is saved. But, you say, that’s a pretty extreme example, and in that situation they may as well not use the centralised management software. I would agree with you. The more dynamic the network, the less you benefit from a feature like centralised updates.

You seem to have missed what I thought was a core principle of the master/subordinate model. To my way of thinking, the main advantage it brings is consistency and constancy to the security layer across the LAN. Home/small business LANs usually do not have a geek to nurture them.

...so why have centralised updates at all? I think it only really begins to make sense in an environment with many clients, a server somewhere, and limited internet bandwidth. As an aside, AVG only recommend their LAN-based update software in LANs with more than 15 clients.

I think having all PCs on your LAN, regardless of the size of the LAN, consistent in their security makes an enormous amount of sense. :wink:

As long as the server is reasonably stable, it shouldn't be a problem. I can foresee potentially more outage time caused by the delay between when a master goes offline and when the next arbitration/election occurs (point 6 on your original list). If we were to design such a system, this delay would have to be minimised. Perhaps the master should be pinged at regular intervals? Or the arbitration/election period should be set to an appropriately small value? Perhaps the master should send out a "heartbeat" once a minute or so, telling clients it is still there? The heartbeat information could include update/version information, which the clients could then use to determine whether any updates are required.

You’re getting there. :wink: The heartbeat is a better option. It could be set at, say, every three or four minutes, and the absence of the heartbeat could trigger an arbitration. “The heartbeat information could include update/version information”. If the master/subordinate model is adopted, wouldn’t all connected PCs then already have the same update/version? In the case of a LAN failure causing discontinuity across the LAN, the original announce/deny sequence could include the current update/version info to re-establish consistency.

So the client would then forward the policy update to the master for distribution? Yep, that sounds like it would work... The policy would have to have some attached version information, to determine which policy should be propagated. The PC that is being used for the policy update must also ensure it is using the most current policy *before* the edit. If versioning were to be done by timestamp, some system should be in place to ensure that all clients have reasonably synchronised clocks.

When you say “policy” are you meaning policy in the server sense of the word? If I’ve used the word “policy”, I apologise. I was thinking of it in terms of firewall rules, scan schedule, backup schedule, AV includes and excludes etc. - things peculiar to Comodo apps, not to the O/S. Sorry if I misled you.

The more I think about it, the more I’ve come to realise that the management component needs to A) be on all connected PCs on the LAN and B) be able to “think” on two levels.

The first level is the master/subordinate level and is concerned with the master dragging stuff off the internet and the subordinates listening to accept updates for all Comodo apps or notifications that none exist.

The second level consists of the user-definable changes to each application - firewall rules, backup schedules, AV scan schedules, passwords etc. This second level could be activated by logging in to the management component using a master password, which would send a signal across the LAN to prepare for a config change. When any changes are effected, the modified config could be passed around the LAN, again ensuring consistency, not just in updates, but in how the apps are configured.

Melih, I’ll start nutting out a process flow diagram for how I see this heading. It may take a few days, but I’ll post it as soon as I can.

I think that dooplex and I are looking at this from opposite ends of the spectrum, he from the corporate angle and me from the home small business end. Do you think it should be developed as a peer type product and uplifted to corporate, or start corporate and dumb it down for the home/small business segment?

Cheers,
Ewen :slight_smile:
(WCF3)

P.S. This is a really enjoyable brain strain! ;D

excellent ideas so far guys…
look forward to it Ewen
thanks
Melih

Hey all,

First draft of process flow for Comodo Management Component (CMC) attached. I’ll do more tomorrow and add how I see config changes being passed around. Hope this is along the lines of what you’re looking for.

cheers
Ewen
(WCF3) (WCF3) (WCF3) OI OI OI!

[attachment deleted by admin]

Hello panic.
I’ve seen part of your Word document, but I don’t know if I am skipping something. Please forgive me if I am.
What I see in the proposal is the absence of some kind of authentication at step #16. If I were a hacker, I could probably send a broadcast to the network saying I have newer updates than the master and send “allow” rules that would poison the master.
Once the master is poisoned, all the clients receiving these rules will also get poisoned.
Am I wrong about this?

Hey memo,

Glad to have you on board.

I see your point, but there is an underlying assumption that the “master” can only be on the same subnet as the subordinates. If a poisoning were to occur, the poisoner would have to be inside, or there would have to have been a trojan-like component installed on the LAN able to receive external commands and issue the poison code, and I’d like to think that CPF would do something about this.

Also, the subordinates will only acknowledge “master” messages from the IP originally broadcast as the “master”. Hmmm - maybe the CMC could advise the firewall if a “master” message came from an IP other than the current “master” and segregate the address automatically. Just thinking out loud on this, but worth looking into. Thanks.

You just lit a lightbulb in my head. When the “master” logs off, the CMC could send a “resign” packet to all the other PCs on the LAN to signal a re-arbitration. The other PCs would, of course, still ping for the “master” during the normal program cycle, in case the “master” was just disconnected.

Couple of questions for you:

What do you think of the overall concept?
Can you spot any holes in the logic flow?
What would you do to tighten/improve the concept?
Do you see this as a commercial product or a home use product?
Do you think this should be a paid-for add-on?
Should it be paid-for only for commercial sites and free for private use (possibly restricting the number of connections)?
If so, do you think it is viable - would enough private users know enough or care enough to pay for this type of product?

I’d really appreciate your input on this. I’ve had a ball thinking this stuff up so far, but it’s always better to bounce ideas off other people’s perspectives.

I’m slanting my perspective on this towards the home LAN with a view to ensuring consistency and continuity, as I believe that the most vulnerable points on the internet (and the most commonly compromised) are home networks. Multiple PC households are becoming more and more common (at least here in Australia, they are) and most of them aren’t backed by sufficient knowledge to make them secure. Either that, or the kids turn stuff off so they can MSN quicker and download 40,000 tunes that they’ll never listen to (you gotta get your priorities right, after all ;)).

The other reason I’m looking more at the home market is simply because bot armies don’t usually live in a corporate environment, and as they are widespread, they must, by necessity, exist on the other computing plateau - the private segment. If we can secure the private PCs, we can take a huge step towards eliminating zombies.

What do you think?

Ewen :slight_smile:
(WCF3) (WCF3) (WCF3)

I think it would be a good idea to use digital signing so that requests can be authenticated. Hence no impersonation can be done unless you take over the machine that holds the private key of the digital certificate used to sign the request.

Melih

DING! And the lightbulb was turned on! This is so obvious I should have my bum kicked for not thinking to include it - a shared key across the LAN to ensure authenticity, with manual installation of the key from a hardware device (floppy, CD, flash drive - which could be checked via the device byte descriptor on the media) mandated to prevent electronic key theft and propagation.

What about the idea of periodic reinforcement of the key installation from the same media? This would turn it into two-factor (“something you know and something you have”) authentication. It would, however, place a reliance on the user retaining the original key install media.
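As a sketch of how the shared key could authenticate “master” traffic once installed, here’s a minimal Python example using an HMAC tag. The key value and message format are made up for illustration, and this shows only the symmetric authentication, not the media-based install step:

```python
import hashlib
import hmac

# Hypothetical LAN-wide shared key, installed manually from removable media.
SHARED_KEY = b"key-installed-from-removable-media"

def sign_message(payload: bytes, key: bytes = SHARED_KEY) -> bytes:
    """Append an HMAC tag so subordinates can tell the sender holds the key."""
    tag = hmac.new(key, payload, hashlib.sha256).hexdigest().encode()
    return payload + b"|" + tag

def verify_message(message: bytes, key: bytes = SHARED_KEY):
    """Return the payload if the tag checks out, else None (possible poisoning)."""
    payload, _, tag = message.rpartition(b"|")
    expected = hmac.new(key, payload, hashlib.sha256).hexdigest().encode()
    return payload if hmac.compare_digest(tag, expected) else None
```

A subordinate that gets None back would simply drop the packet - and, per the earlier idea, could flag the sender’s IP to the firewall for segregation.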

What do you think?

ewen :slight_smile:

Yes, it would be good extra security…

Melih

This sounds like WPA to me :wink:
Now how would the authentication be implemented? Some words would be nice.
I think that in order to prevent poisoning from an “untrusted” source, sources would have to be limited in a way so to beat the peer-to-peer rules sharing method here. :-/

You would have all clients verifying the request by checking its digital signature!
This is about using public key cryptography…

So when a client receives a request, it simply checks whether the request is digitally signed or not. A bit like the code signing people use on their software, etc.
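To illustrate the idea with numbers, here’s a toy textbook-RSA signature in Python. The primes are deliberately tiny and well known (p=61, q=53), so this only demonstrates the sign/verify roles - a real implementation would use a vetted cryptography library:

```python
import hashlib

# Toy textbook-RSA keypair: insecure demo values only.
N, E, D = 3233, 17, 2753   # modulus, public exponent, private exponent

def digest(request: bytes) -> int:
    """Hash the request down to a number smaller than the modulus."""
    return int.from_bytes(hashlib.sha256(request).digest(), "big") % N

def sign(request: bytes) -> int:
    """Only the holder of the private exponent D can produce this."""
    return pow(digest(request), D, N)

def verify(request: bytes, signature: int) -> bool:
    """Any client can check the signature with just the public key (E, N)."""
    return pow(signature, E, N) == digest(request)
```

Clients drop any request whose signature fails to verify, so impersonating the master requires stealing the private key itself - exactly the property Melih describes.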

Melih

Hey all,

Revised draft of the Management Component is attached - purple text is modified or added content. I’ve refined the data exchange logic and added the provision of multiple channels for intra-LAN comms. Authentication and config changes to be done in the next draft, hopefully in a few days.

Sorry for the delay.

Ewen :slight_smile:
(WCF3) (WCF3) (WCF3)

[attachment deleted by admin]

Feel free - :wink: That’s what we’re all here for.

How do you think it should be done?

I think that in order to prevent poisoning from an "untrusted" source, sources would have to be limited in a way so to beat the peer-to-peer rules sharing method here. :-/

Categorically. One thought off the top of my head is to utilise the CD burning capabilities within Comodo Backup to burn the key to CD and then remove the original. This way, the key is remote to the file system until it’s installed and encrypted, and periodic reverification from the original media would prevent MITM spoofing of the key. P.S. It doesn’t have to be CD, but CDs are cheap, easy to create, durable and reproducible (though each is only applicable to a particular installation, of course).
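The periodic reverification step might look something like this in Python - just comparing a hash of the installed key file against the copy on the original media (the paths are placeholders):

```python
import hashlib

def fingerprint(path: str) -> str:
    """SHA-256 of a key file, so copies can be compared without exposing the key."""
    with open(path, "rb") as f:
        return hashlib.sha256(f.read()).hexdigest()

def reverify(installed_key_path: str, media_key_path: str) -> bool:
    """Periodic check: does the installed key still match the original media?"""
    return fingerprint(installed_key_path) == fingerprint(media_key_path)
```

A mismatch would indicate the installed key has been tampered with or replaced, and could trigger a re-install prompt from the original media.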

Do you think this is too fussy for home users?

Rgds,
Ewen :slight_smile:
(WCF3) (WCF3) (WCF3)

I think it’s a bit too much for the average home user, but I can see a few using it :slight_smile:

Melih

So panic, does the current setup allow any available computer to act as the master “server”, as long as it is the first serving computer on the local network (with encryption)?
I like the CD idea, where the custom generated key could be by default installed on client machines.

What I am thinking (don’t know if it goes with your current idea, though)

  • 2 versions of Comodo, which I will codename Client Comodo and Server Comodo
    – Client Comodo contains the client for retrieving rules from Server Comodo, as well as the firewall itself
    – Server Comodo contains a packager for Client Comodo (to create a custom installer with the master key built in, integrated into an EXE file). Server Comodo features only this packager and the server for serving rules; the firewall itself is not installed (maybe optionally?)

Once Server Comodo is installed, it prompts the user whether to generate new keys or use existing ones. If the user chooses to generate new keys, then one pair of keys would be generated (a private and a public key). The private key would only be used by the master for serving rules, and the public key would be used by the packager and the client. In addition, Server Comodo also generates a unique GUID (used by both the server and the client, and included in the list of output files/keys) so that there could be different rules on the same network with no interference between the different serving Comodos.
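The “generate new keys” step might produce a bundle like this (random tokens stand in for a real public/private key pair here; only the GUID generation is shown faithfully):

```python
import secrets
import uuid

def generate_server_identity():
    """First-run setup for Server Comodo: a key pair plus a GUID that
    namespaces this server's rules on the LAN.  The token_hex values are
    placeholders for a real asymmetric key pair."""
    return {
        "guid": str(uuid.uuid4()),             # identifies this rule set on the wire
        "private_key": secrets.token_hex(32),  # kept by the server for signing rules
        "public_key": secrets.token_hex(32),   # baked into the client installer
    }
```

The GUID travels in every announcement, so two unrelated Server Comodos on the same wire never interfere with each other’s clients.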

Once these keys are generated, Server Comodo starts itself with the newly generated keys and begins broadcasting its presence, along with the unique GUID identifying the rules it is serving, so clients will know when to grab and when not to.

Anyway, continuing on with the story. The Server Comodo administrator could then start the Comodo Packager, which opens the public key and GUID files and packages both of them into the latest version of Comodo Firewall inside an EXE file. (The packager could also include a screen that allows the administrator to customise the setup parameters as well as the default rules - there could be a rules creator for this; if not, feature a cloned interface of Comodo Firewall as the rules creator (not the actual firewall, since it is only for making rules and not active itself).)

Once the client opens the client installer EXE file, the installer extracts the setup along with the GUID and the public key and carries on with the installation process. Once Comodo Firewall is installed, it listens on the local network for a server announcement matching the GUID it has installed. If the GUID matches, the client sends a request back to Server Comodo (using the Server Comodo IP address and/or possibly MAC address) and negotiates using the public/private keys (an encrypted connection). Once a connection has been established between the client and the server, rulesets can be sent to the clients, as well as any other update packages. (This process is the same for the other clients.)

Back on Server Comodo, the “use existing keys” feature would open the existing private key and GUID, so that more than one instance of Server Comodo could be installed on the same network without causing any conflicts. When Client Comodo sees an announcement on the network, it will only take the first one, so there shouldn’t be any conflicts between the different Server Comodos. In addition, Client Comodo could/should cache the announcements from the Server Comodos that share its GUID, so it knows where to get the rulesets when it wants them, without having to wait for the next announcement message to be broadcast on the network. This could also be used to minimise the workload on any one Server Comodo, by picking a random server from the discovery list. (A timeout value could be set to expire old servers on the list.)
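The announcement cache with a timeout could be sketched like so (the TTL value and names are arbitrary; in real code this would be fed by the network listener):

```python
import time

ANNOUNCE_TTL = 90.0   # seconds before a cached server entry goes stale (arbitrary)

class ServerCache:
    """Client-side cache of Server Comodo announcements for one rule-set GUID."""
    def __init__(self, my_guid):
        self.my_guid = my_guid
        self.servers = {}             # server ip -> time of its last announcement

    def on_announcement(self, guid, ip, now=None):
        if guid == self.my_guid:      # ignore servers serving other rule sets
            self.servers[ip] = time.monotonic() if now is None else now

    def pick_server(self, now=None):
        """Expire stale entries, then pick a live server to fetch rules from."""
        now = time.monotonic() if now is None else now
        self.servers = {ip: t for ip, t in self.servers.items()
                        if now - t <= ANNOUNCE_TTL}
        return min(self.servers, default=None)   # deterministic pick; could be random
```

Picking randomly from the surviving entries (instead of `min`) would give the load-spreading behaviour described above.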

There should also be an option in the client controlling how frequently the ruleset is updated, so that it doesn’t update itself every time an announcement message is received. (It would be insane if every machine on the network tried to update its ruleset every time an announcement packet was broadcast through the network.)
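That update-frequency option is essentially a throttle - something like this (the interval value is illustrative):

```python
import time

class UpdateThrottle:
    """Skip ruleset refreshes that arrive before the configured interval is up."""
    def __init__(self, min_interval=300.0):       # seconds; user-configurable
        self.min_interval = min_interval
        self.last_update = None                   # time of the last accepted refresh

    def should_update(self, now=None):
        now = time.monotonic() if now is None else now
        if self.last_update is None or now - self.last_update >= self.min_interval:
            self.last_update = now
            return True
        return False    # announcement heard, but it's too soon to refresh again
```

Each client would run its announcements through `should_update` and only hit the server when it returns True, keeping broadcast storms from turning into update storms.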

Well, I hope this makes some sense… or does it? :wink:

Exactly. The whole idea was to create a subnetwork within the LAN that had no reliance on a central resource, had redundancy of distribution, and handled the dissemination of Comodo-specific info across the LAN.

Constancy and consistency are, IMHO, the biggest things missing from the vast majority of home LANs, and that’s where zombies live - not in the corporate world, but in our lounge rooms and kids’ bedrooms.

If we can introduce a level of control to our own environments, we have taken a step towards contributing to the security of the internet as a whole.

cheers,
Ewen :slight_smile:

First, I would like to thank Ewen for pointing me to this topic. It is very intriguing and the vast amount of knowledge and creativity that resides within it is truly impressive. Well Done Gentlemen.

I would initially like to say that this is definitely a product I would be interested in, regardless of whether there were licensing fees involved. Currently, I work from home and run 2 PCs on a satellite internet connection. For those who are familiar with satellite internet, you know that I have a maximum amount of bandwidth I can consume before the infamous “Fair Access Policy” kicks in and my connection speed is throttled from a 1MB connection down to 56K. Needless to say, any amount of bandwidth saved is a blessing… I could definitely see a market for home LANs, especially those who are in the same situation as myself.

My second PC is primarily used by my internet-savvy 9 year old daughter, which creates a whole new set of concerns aside from the typical security risks. If it was decided to develop this for the home LAN market, do you think it is possible to add Parental Controls into the mix of tasks it could perform? If this was answered already or is in development for CPF or CAV, please point me in the right direction.

From what I have read so far, if my child were to boot her PC prior to mine, hers would assume the role of master and distribute the updates across the LAN to my desktop once it was booted. Would it be possible to then override hers and take over the master role, perhaps using a password, so that activity on the now-subordinate PC could be monitored? Perhaps flags could be raised if she visited sites that were deemed inappropriate or somewhat hazardous, based on an updated list of URLs from COMODO and/or manually entered into a stored list by myself. The URL history could also be transferred with the heartbeat, as well as the current timestamp of the Comodo updates.

If this was covered already or I am way off base (it is almost 5:30am here), please feel free to point and laugh. I just know that while protection from trojans, malware, etc. on my daughter’s PC would sell me on the product and its purchase, her protection would sell me threefold.