To answer the individual comments - at the risk of seeming ornery
@kdennis
“The amount of outages in the last year has been significant.” There have been two ‘outages’ this year, totaling ~11 hours. There have also been two periods this year where some emails to some domains were delayed by up to 4 hours; the ASG platform did not go down during these ‘delayed delivery’ periods.
“…and they’ve raised their prices on top.” This is the first price increase for ASG in many years. ASG has evolved significantly in that period with add-ons such as editable email templates, administrator delegation, user and group permissions, user login-location history and GeoIP restrictions, relay restrictions, the audit log, LDAP importing of user accounts, auto-discovery of users’ mailboxes, highly configurable blacklist/whitelist rule creation, per-user white- and blacklisting, and the bolt-on mail-backup Archive feature, to mention but a few.
@Ossie44
“Yes agree, our subscriptions are not due until October so will be thinking seriously about switching to someone else such as MailGuard. Live and learn…” While we would hate to lose you, we would hate it even more if you were dissatisfied with the service. There are reviews of various vendors’ products here > http://community.spiceworks.com/cloud/anti-spam/reviews .
“So the load balancer has taken 7 hours to restart??” It is a little more complicated than rebooting a load balancer; there are many checks and balances that need to be put in place leading up to a reboot of anything within the ASG infrastructure (more info follows below).
“realise one of the cleaners kicked out the power lead to the datacentre last night.” Cleaners are not allowed into our datacentre; neither are the developers, the DBAs, nor the DevOps guys. Only hardware/networking guys and gals are allowed in, and only with just cause. Principle of Least Privilege and all that.
Furthermore, we were aware of the issue within minutes of it happening and activated the required support teams as needed. The customer-facing support guys were informed that there was an issue as the 1st-line devs/devops/DBA/NOC/SOC teams were all brought into play to investigate. The problem was then escalated to the 2nd-line devs/devops/DBA/NOC/SOC teams, as the 1st-line guys did not have the access rights required to get to this particular problem after they identified potential causes. Then some of the 3rd-line guys had to be brought in to access those areas that the 2nd-liners are not allowed to get to. While all this was happening, the hardware guys got called out to the datacentre itself in case cable-switching or hardware replacements had to happen. This is why it can take some time to re-stabilize the platform, not because someone in Clifton realized something was amiss.
“As I expected the system came back up around the same time people would have arrived in Comodo’s US office.” No one in Comodo HQ, barring myself and one other, has any access to the ASG back-end/hardware/infrastructure; Clifton staff only have configuration access at a global level. Principle of Least Privilege again.
“…do your customers outside of the US not matter to you?” ALL customers matter, regardless of where they are or how small or large their organizations are.
“Your company said it wouldn’t happen the last time in September 2014. I was silly enough to believe you then.” The Sept. 2014 incident was completely unrelated to what happened last week.
@kjhan
“I have to say, I’ve not been happy with Comodo Antispam Gateway.” May I ask what you have not been happy with? Do you have any support tickets in which you requested assistance? Please contact me directly if you wish to discuss.
@AngryCustomer2 – Please contact me directly.
Best,
Michél