I'm no longer working for a big financial institution, and I don't want to defend them, but you can't really know what results this kind of "black swan" event can have. It's too complex, and the consequences of flapping services can be a real nightmare. It's much, much safer to power everything off.
They all have contingency plans, but in most cases those plans are not 100% tested. And if you don't have people asking "what's wrong with this disaster recovery plan?", which could have been the case here, you won't use that option when an easier one is available.
Usually in trading, those who know don't talk, and those who talk don't know. (Al Brooks)
success requires no deodorant! (Sun Tzu)
Very true @sam028. It seems like we need a market structure redesign, which I realize is "pie in the sky". It would take a lot of money to pull a structure change off, but Wall Street is money, so it seems it should be better protected. A less centralized structure with redundancy all over the country would be one answer: if one location goes down, nothing really shuts off and things just keep ticking. That said, I am not smart enough to understand how difficult that might be to pull off, or the new problems it might create.
"The day I became a winning trader was the day it became boring. Daily losses no longer bother me and daily wins no longer excite me. It took years of pain and busting a few accounts before I finally got my mind right. I survived the darkness within and now just chillax and let my black box do the work."
I would agree it is very complex, but that is what they pay big money for - to deal with it.
A disaster recovery plan that is not tested is not a recovery plan. It means a lot of money gets wasted to make people feel safe when there is nothing there to feel safe about.
This was different from 9/11 because they saw it coming. At that point, you do the "advance" part of advance planning and staff up: coordinate the necessary resources, sleep there on cots, and eat sandwiches for the duration.
The bonus they gave themselves after 9/11 was essentially to show up and flip the ON switch.
Before becoming a full-time trader, I worked as a consultant for Oracle Corporation. Our group was created specifically to support Oracle's 100 largest clients, and disaster recovery was one of our specialties. My last client was FedEx. They had uptime requirements of around 99.99%, if I recall, and along with HP we designed their disaster recovery plans and infrastructure.
They had a parallel server architecture for scaling, and a failover server architecture for disaster scenarios. The failover servers were kept within x minutes of the real-time live servers. In an unplanned disaster at the main site (e.g. earthquake, terrorism), the failover site (in a different geographic location) would automatically detect that the main site was down and take over operations. For a planned failover (e.g. a hurricane warning), we would schedule the switchover in advance.
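The automatic detection described above is typically heartbeat-based: the standby site tracks the last heartbeat received from the main site and promotes itself once the main site has been silent past a timeout. Here is a minimal sketch of that idea; the class and method names (`FailoverMonitor`, `record_heartbeat`, `check`) are hypothetical, not anything from the actual FedEx/Oracle setup.

```python
import time

class FailoverMonitor:
    """Heartbeat-based failover detection (illustrative sketch only).

    The standby site records the last heartbeat received from the main
    site; if more than `timeout_s` seconds pass with no heartbeat, it
    declares the main site down and promotes itself to active.
    """

    def __init__(self, timeout_s=30.0):
        self.timeout_s = timeout_s
        self.last_heartbeat = time.monotonic()
        self.active = False  # standby until promoted

    def record_heartbeat(self):
        # Called whenever a heartbeat arrives from the main site.
        self.last_heartbeat = time.monotonic()

    def check(self, now=None):
        """Promote the standby if the main site has gone silent."""
        now = time.monotonic() if now is None else now
        if not self.active and now - self.last_heartbeat > self.timeout_s:
            self.active = True  # main site presumed down: take over
        return self.active

# Usage: simulate time passing with no heartbeats from the main site.
mon = FailoverMonitor(timeout_s=30.0)
mon.last_heartbeat = 0.0          # pretend the last heartbeat was at t=0
assert mon.check(now=10.0) is False   # within timeout, stay on standby
assert mon.check(now=45.0) is True    # silence exceeded timeout: promote
```

Real systems add quorum or witness nodes on top of this, so that a network partition between the two sites doesn't leave both believing they are active (the classic "split-brain" problem).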
One of the most important aspects of any disaster recovery plan is to actually test the plan regularly. Many companies put together elaborate disaster recovery plans, never test them, and only discover the major glitches when the plan has to be engaged for an actual disaster.
Last edited by monpere; October 31st, 2012 at 03:23 PM.