Hang in there, Melih. I think we need to understand the motor before we decide on the upholstery. 
I figure you mean "exposed" as in "potentially not up-to-date". Yes, this could be the case in a single-server environment. But you could always allow the client software to fall back to standard internet updates.
If the workstations re-arbitrated, the need to fall back to standard update methods is avoided. The standard method still needs to be there, though.
Yes, that makes sense. It's an intriguing idea... One thing I like about it is that as far as I can see, the system with the most uptime would almost naturally become the master.
If the “master” CAN be any machine on the LAN, uptime is no longer a factor and the reliance on a particular PC is removed.
Heavens, yes. I know of no other vendor that is prepared to offer this level of sophistication in any free product - and most aren't this clever even when you pay for them! Let's hope we can get it past the vapourware stage :)
If this master/subordinate model sees the light of day, I think Comodo will scoop the home/small business market that does not have a dedicated server. I’ll try and hunt up some stats on the nature, size etc. of small business networks. Using my wife’s client list as a starting point: she has 114 companies on her client list, and their collective networks comprise 764 PCs. Of those 114 networks, 4 have a dedicated server, and these 4 networks combined have just over 100 PCs connected to their respective servers. This leaves around 660 PCs spread over 110 networks that do not have the luxury of a dedicated server. I think this is too big a market segment to ignore, and it may necessitate both a server-dependent model and a peer-to-peer LAN model. What do you think?
Yep, got it. But what are we doing it for? Why have centralised updates, as opposed to each client downloading the update itself? I can think of only two possible reasons:
- To minimise internet bandwidth usage, particularly for organisations with a large number of clients
- To better control when the internet bandwidth required for updates is used
The first scenario has bandwidth benefits roughly proportional to the number of clients, and generally in environments with a large number of clients, there is a high-availability server lurking somewhere. If the environment is so dynamic as to truly benefit from a dynamically assigned master, then you’re not going to reduce internet bandwidth usage. Consider the following 3-PC home LAN:
- All PCs are initially off.
- PC-1 boots, declares itself the master, polls for and downloads updates.
- PC-1 shuts down.
- PC-2 boots, declares itself the master, polls for and downloads updates.
- PC-2 shuts down.
- PC-3 boots, declares itself the master, polls for and downloads updates.
- PC-3 shuts down.
Three PCs essentially perform three internet updates - no bandwidth is saved. But, you say, that’s a pretty extreme example, and in that situation they may as well not use the centralised management software. I would agree with you. The more dynamic the network, the less you benefit from a feature like centralised updates.
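The serial-boot scenario above can be sketched in a few lines. This is purely illustrative, assuming the simplest possible election rule ("no master visible, so I become master and download") and nothing about Comodo's actual protocol:

```python
# Illustrative sketch only: each PC boots alone, sees no master on the
# LAN (the previous one has already shut down), declares itself master,
# and so performs its own internet update.

def simulate_serial_boot(pc_names):
    """Count internet downloads when PCs boot and shut down one at a time."""
    internet_downloads = 0
    for pc in pc_names:
        master_present = False  # previous master has already shut down
        if not master_present:
            # pc declares itself master, then polls and downloads
            internet_downloads += 1
        # pc shuts down before the next one boots
    return internet_downloads

print(simulate_serial_boot(["PC-1", "PC-2", "PC-3"]))  # → 3
```

Three PCs, three downloads: exactly the same internet bandwidth as with no centralised updates at all.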
You seem to have missed what I thought was a core principle of the master/subordinate model. To my way of thinking, the main advantage it brings is consistency and constancy to the security layer across the LAN. Home/small business LANs usually do not have a geek to nurture them.
...so why have centralised updates at all? I think it only really begins to make sense in an environment with many clients, a server somewhere, and limited internet bandwidth. As an aside, AVG only recommend their LAN-based update software in LANs with more than 15 clients.
I think keeping all PCs on your LAN, regardless of the size of the LAN, consistent in their security makes an enormous amount of sense.
As long as the server is reasonably stable, it shouldn't be a problem. I can foresee potentially more outage time caused by the delay between when a master goes offline and when the next arbitration/election occurs (point 6 on your original list). If we were to design such a system, this delay would have to be minimised. Perhaps the master should be pinged at regular intervals? Or the arbitration/election period should be set to an appropriately small value? Perhaps the master should send out a "heartbeat" once a minute or so, telling clients it is still there? The heartbeat information could include update/version information, which the clients could then use to determine whether any updates are required.
You’re getting there.
The heartbeat is a better option. It could be set at, say, every three or four minutes, and the absence of the heartbeat could trigger an arbitration. “The heartbeat information could include update/version information”. If the master/subordinate model is adopted, wouldn’t all connected PCs already have the same update/version? In the case of a LAN failure causing discontinuity across the LAN, the original announce/deny sequence could include the current update/version info to re-establish consistency.
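To make the heartbeat idea concrete, here is a hedged sketch. The interval, timeout, and message fields are all invented for illustration; a real implementation would presumably broadcast this over UDP on the LAN:

```python
# Sketch of the heartbeat/arbitration idea. All constants and field
# names are assumptions, not anything from an actual Comodo product.
import json
import time

HEARTBEAT_INTERVAL = 180                       # "every three or four minutes"
ARBITRATION_TIMEOUT = HEARTBEAT_INTERVAL * 2   # miss two beats -> re-arbitrate

def make_heartbeat(master_id, version):
    """What the master would broadcast: its identity plus current
    update/version info, so subordinates can check their own consistency."""
    return json.dumps({"master": master_id,
                       "version": version,
                       "sent_at": time.time()})

def needs_arbitration(last_heartbeat_at, now=None):
    """A subordinate calls for a new arbitration/election once the
    master's heartbeat has been absent for too long."""
    now = time.time() if now is None else now
    return (now - last_heartbeat_at) > ARBITRATION_TIMEOUT
```

Bundling the version number into every heartbeat also gives a subordinate a cheap, ongoing consistency check rather than relying only on the initial announce/deny sequence.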
So the client would then forward the policy update to the master for distribution? Yep, that sounds like it would work... The policy would have to have some attached version information, to determine which policy should be propagated. The PC that is being used for the policy update must also ensure it is using the most current policy [b]before[/b] the edit. If versioning were to be done by timestamp, some system should be in place to ensure that all clients have reasonably synchronised clocks.
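One way around the clock-synchronisation worry is to version the policy with a monotonically increasing counter bumped by whichever PC performs the edit, rather than a timestamp. A minimal sketch, with all field names invented for illustration:

```python
# Sketch of counter-based policy versioning. A monotonic version number
# sidesteps the need for synchronised clocks that timestamps would impose.

def newer_policy(a, b):
    """Return whichever of two policy dicts should be propagated."""
    return a if a["version"] >= b["version"] else b

def edit_policy(current, changes):
    """The editing PC must start from the most current policy, apply its
    changes, then bump the version so the edit wins later comparisons."""
    updated = dict(current)
    updated.update(changes)
    updated["version"] = current["version"] + 1
    return updated

old = {"version": 3, "firewall_rules": ["allow dns"]}
new = edit_policy(old, {"firewall_rules": ["allow dns", "allow http"]})
print(newer_policy(old, new)["version"])  # → 4
```

The remaining risk is two PCs editing concurrently from the same base version; the master would need some tie-break rule, which is exactly why the edit should be funnelled through the master for distribution.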
When you say “policy” are you meaning policy in the server sense of the word? If I’ve used the word “policy”, I apologise. I was thinking of it in terms of firewall rules, scan schedule, backup schedule, AV includes and excludes etc. - things peculiar to Comodo apps, not to the O/S. Sorry if I mislead you.
The more I think about it, the more I’ve come to realise that the management component needs to A) be on all connected PCs on the LAN and B) be able to “think” on two levels.
The first level is the master/subordinate level and is concerned with the master dragging stuff off the internet and the subordinates listening to accept updates for all Comodo apps or notifications that none exist.
The second level consists of the user-definable changes to each application - firewall rules, backup schedules, AV scan schedules, passwords etc. This second level could be activated by logging in to the management component with a master password, which would send a signal across the LAN to prepare for a config change. When any changes are effected, the modified config could be passed around the LAN, again ensuring consistency, not just in updates, but in how the apps are configured.
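The two "thinking levels" could live in a single management component on every PC. A rough sketch, with all names and the placeholder password invented purely for illustration:

```python
# Sketch of the two-level management component described above.
# Everything here (names, password handling) is an assumption.

class ManagementComponent:
    def __init__(self, is_master=False):
        self.is_master = is_master
        self.app_version = 0
        self.config = {"version": 0}

    # Level 1: master/subordinate update distribution.
    def receive_update(self, version):
        """Subordinate accepts an update pushed by the master."""
        if version > self.app_version:
            self.app_version = version

    # Level 2: user-driven config changes, gated by the master password.
    def apply_config(self, new_config, password, master_password="hunter2"):
        """After a successful login ("hunter2" is a placeholder), accept the
        modified config if it is newer than the local copy, keeping the LAN
        consistent in configuration as well as updates."""
        if password != master_password:
            return False
        if new_config["version"] > self.config["version"]:
            self.config = new_config
        return True
```

Level 1 runs constantly and automatically; level 2 only fires when a user authenticates and pushes a change, which then fans out over the LAN.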
Melih, I’ll start nutting out a process flow diagram for how I see this heading. It may take a few days, but I’ll post it as soon as I can.
I think that dooplex and I are looking at this from opposite ends of the spectrum, he from the corporate angle and me from the home/small business end. Do you think it should be developed as a peer-type product and uplifted to corporate, or start corporate and dumb it down for the home/small business segment?
Cheers,
Ewen 
(WCF3)
P.S. This is a really enjoyable brain strain! ;D