I hope not, AM....
For those techies who want to know what happened, below is the explanation I received from the hosting provider.
Quote:
Dear Customer,
Due to very rapid growth over the last 18 months, we decided earlier this year to move our entire network into a new datacentre. This process began many months ago, and we have been configuring, installing and testing equipment at the new facility for the last 8 weeks. The facility has now been live and operational for two weeks, with the first set of live servers moved in 9 days ago.
During this entire period, we encountered no issues with our network or data connections. On Friday 19th October, we moved the rest of our servers. The move was completed on schedule by 7AM on Saturday for all but two of our servers; those two remaining servers were then fixed over the remainder of Saturday.
Once again, all network connectivity was working as expected at this point. We then had a core router failure on Sunday evening, resulting in an unscheduled outage of our services. The failed router was replaced by a backup router, but due to the recent move, the backup router was still at the old facility, so the replacement/switchover process took longer than it normally would. Services were back to normal by about 10PM on Sunday.
Not surprisingly, the amount of traffic going through our network reaches its peak during business hours. It was therefore only just after 9AM on Monday that we noticed our main data connection could not handle the volume of traffic it was expected to carry. Once again, because we had only just moved our network to the new facility, our backup data link was not yet scheduled to be moved there.
As a result, customers would have noticed that browsing sites on our network was quite slow, and they may also have experienced timeouts downloading their email from our servers.
Our network engineers began working to identify the problem while also trying to activate our backup link into the new facility as quickly as possible. We also had the option of rolling back the entire move, but we decided this was likely to cause more disruption than continuing as we were. As of 6:30PM on Tuesday, we were able to activate our backup link, providing adequate network capacity, and by 6:45PM we had also resolved the issue with the main connection. Our services have therefore been fully operational since 6:30PM on Tuesday.
The cause of the problem with the main link was a misconfiguration of the link by our uplink provider, which took them a full 2 days to identify, acknowledge and resolve.
So, we now have our main link back at full capacity, and we have a live backup link if needed. We also have our redundant router back online, installed in the same rack as the live router. One of the reasons for moving to the new datacentre is that it provides more options for data links and has more redundancy built in. Over the next few months, we will be rolling out a number of enhancements and additions to further improve the performance, reliability and redundancy of our network and systems.
I would like to sincerely apologise for the inconvenience caused by the above issues. We take great pride in the stability and reliability of our networks, and we spend considerable time and money ensuring we have the best staff and equipment in place to achieve the highest possible levels of stability and reliability.
I would like to thank all our customers for their understanding and patience during this time, and I hope to have the opportunity to continue providing our various high-value services to you for a long time to come.