Payment Processors Go for 100 Percent Uptime

Mary E. Shacklett, President, Transworld Data | 11/14/2011 | 12 comments

In most circles, "five-nines" (99.999 percent) uptime is the standard for 24/7 datacenter operations, but some payment processors are now targeting 100 percent uptime for their banking and consumer customers -- and they are taking every step to ensure that level of performance.

Is this realistic? In the mainframe world where most of these payment processors operate, some CIOs tell me they have never seen the mainframe go down in all the time they have held their positions; in one case, the CIO has been in his job for more than 20 years. But the challenge of 100 percent uptime goes beyond mainframe resources and extends to data communications and the distributed side of payment processor datacenters.

Moving to 100 percent uptime is not a trivial pursuit. It takes planning, investment, and the patience to see a datacenter project of this magnitude through. Common elements in these projects include:

  • Control over at least two datacenters and the use of a third for backup. Because of the urgency of the payment processing business, the major players all own and control their own datacenters. The starting point for any 100 percent uptime plan is transforming these datacenters so that their respective resources and applications all run in parallel with one another. The third leg of the plan is a third-party datacenter for disaster recovery and backup, standing in the wings as extra insurance in the event that the corporate datacenters are destroyed or compromised.

  • Redundant data communications between corporate datacenters and fast datacom inside each datacenter. Payment processors use multiple data transport technologies (fiber optic, Ethernet, etc.) to move data between their datacenters. They do this for redundancy and load leveling, as well as for speed. Within the datacenter itself, they are likely to use a high-speed interconnect like InfiniBand to move data quickly between processors and I/O devices.

  • Parallel processing and data mirroring. The goal is to process data on two separate processors in two separate facilities simultaneously, so that if you need to fail over from one to the other, there is no disaster recovery or backup step -- you simply keep going. This is accomplished by clustering machines into what appears to be a single system image that allows for data sharing and parallel computing, run in combination with systems management software capable of managing the end-to-end computing and data resources and performing automated failovers. (A simplified sketch of this pattern appears after this list.)

  • Centralized staff. Instead of replicating every staff position at both datacenters, the goal is to leverage the well-developed skills of a single staff that is strategically spread between the datacenters. This ability to streamline staff is largely due to the amount of automation that has been introduced into the datacenters.

  • A patient migration of applications. Understandably, making a move to an environment like this carries heavy risk. Those who are undertaking such projects plan for implementation periods of many months as they test and retest each application before they move it into the second datacenter for parallel processing.

  • Transparency minus one. The objective of painstakingly testing, retesting, and migrating applications for parallel processing over many months is to move to a 100 percent uptime, parallel processing environment without your customers ever finding out about the project -- because everything continues to run flawlessly while you are doing it. However, when the migration of applications is finally complete, you are going to have to perform the ultimate failover test. Because there is always the risk that something could go wrong, you must notify your customers in advance and secure the test window you need.
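
To make the parallel-processing bullet concrete, here is a minimal Python sketch of the active-active idea, assuming two hypothetical sites; the Site class and its submit() method are stand-ins for a real clustered processing system, not any particular vendor's API.

```python
# Hypothetical sketch of active-active processing across two datacenters:
# every transaction is submitted to both sites in parallel, so losing one
# site requires no recovery step -- the survivor already has the work.
import concurrent.futures


class Site:
    """Stand-in for one datacenter's clustered payment-processing endpoint."""

    def __init__(self, name: str):
        self.name = name
        self.healthy = True

    def submit(self, txn: dict) -> bool:
        # A real implementation would call the site's processing cluster;
        # here we simply simulate success or an unreachable site.
        if not self.healthy:
            raise ConnectionError(f"{self.name} is unreachable")
        return True


def process(txn: dict, sites: list) -> bool:
    """Submit txn to all sites in parallel; succeed if any site accepts it."""
    with concurrent.futures.ThreadPoolExecutor(max_workers=len(sites)) as pool:
        futures = {pool.submit(site.submit, txn): site for site in sites}
        accepted = False
        for future in concurrent.futures.as_completed(futures):
            site = futures[future]
            try:
                future.result()
                accepted = True
            except ConnectionError:
                # One site failing is not a disaster-recovery event:
                # the other site keeps processing without interruption.
                print(f"warning: {site.name} missed txn {txn['id']}")
        return accepted


sites = [Site("datacenter-east"), Site("datacenter-west")]
sites[1].healthy = False            # simulate losing one facility
assert process({"id": "txn-001", "amount": 25.00}, sites)  # still succeeds
```

In practice the clustering and systems management software handles this routing; the point of the sketch is only that failover becomes a non-event when both sites are already doing the work.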

Due to the costs and the risks, 100 percent uptime targets aren’t for everyone, but in banking and finance, they are the future. Today’s task for technology companies and payment processors is to develop methodologies that simplify this transition -- which almost certainly will be expected in tomorrow’s financial marketplace.

Damian Romano   Payment Processors Go for 100 Percent Uptime   11/15/2011 1:14:53 PM
Re: Why?
@Hospice - I've been involved in product development in the past and have seen how budgets are considered. In an instance like this, the bank will likely evaluate how to pass this cost on to the customer and promote it as an additional service. I suppose the right institution may not pass it on to the customer overtly, but inevitably the customer will feel the impact somehow.
Damian Romano   Payment Processors Go for 100 Percent Uptime   11/15/2011 1:09:10 PM
Re: Why?
@Sara - There are certainly benefits to having 100% uptime, for sure. But I just don't think the cost would ultimately be worth it. Surely much of this is speculation because we don't have hard dollar figures. Having worked in banking for the past 10+ years in a variety of functions, you get to know what customers want. Personally I don't think customers would see the value in having this uptime unless they didn't incur any charge.

I spend a lot of time trying to persuade financial institution-haters that banks have to make money too. They're a business just like everyone else. But the biggest backlash I hear is, "Why do I have to pay to get my own money?" So if you were to offer 100% uptime for an additional $2.50 a month per account, I doubt that would go over very well. :)
Ivan Schneider   Payment Processors Go for 100 Percent Uptime   11/15/2011 12:03:36 PM
Re: Why?
@HH

You can be 24/7 without having 100% uptime. "Five nines" (99.999%) uptime means that a service is only down for 5.26 minutes per year (see "high availability" on Wikipedia), and "six nines" is 31.5 seconds/year. 100% uptime is 0.0 seconds per year, which means that there's neither planned downtime nor unplanned downtime.
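
The arithmetic is easy to check; a minimal Python sketch, assuming the 365-day year behind both of those figures:

```python
# Downtime budget implied by an availability target ("number of nines").
MINUTES_PER_YEAR = 365 * 24 * 60  # 365-day year, matching the figures above


def downtime_minutes_per_year(availability: float) -> float:
    """Yearly downtime budget, in minutes, for a given availability."""
    return MINUTES_PER_YEAR * (1.0 - availability)


for label, availability in [("five nines", 0.99999),
                            ("six nines", 0.999999),
                            ("100 percent", 1.0)]:
    budget = downtime_minutes_per_year(availability)
    print(f"{label}: {budget:.2f} min/yr ({budget * 60:.1f} s/yr)")
# five nines: 5.26 min/yr (315.4 s/yr)
# six nines: 0.53 min/yr (31.5 s/yr)
# 100 percent: 0.00 min/yr (0.0 s/yr)
```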
Ivan Schneider   Payment Processors Go for 100 Percent Uptime   11/15/2011 11:50:43 AM
Re: Why?
@Sara: 

First, we should make a distinction between internet banking and payment processors.

The typical internet banking site is far from 100% uptime, either in reality or in aspiration. I would say the vast majority of these sites have scheduled downtime at some point, typically at 3am on a weekend or some other lightly-trafficked time. You may get a message: "Temporarily down, come back in an hour." No big deal. Considering the need to do periodic patch updates or other maintenance tasks, there's currently no way of getting around this without spending a great deal of money on parallel infrastructure. Based on adoption rates, consumers have already accepted Internet banking "as-is," with periodic downtime, and I see no evidence of a huge public clamoring for higher quality of service.

The original blog was talking about payment processors, the third parties that sit between merchants and their banks. There is more at stake here, because nobody wants a transaction to fail, even at 3am. And I agree that there should be extremely robust disaster recovery and business continuity planning, and that the service should work 24/7.

However, once you set the bar at "100 percent uptime," you justify a level of investment that may extend well beyond the business benefit to end users. If there's a huge earthquake, flood, etc., and payments go down for an hour or two, well, so what? As long as the service comes back online, that's the important part. Occasionally we're going to have a power outage, and occasionally we're going to have a financial outage. It's not the end of the world. The industry should pay attention to resiliency instead of uptime.

Yes, the banking industry is part of the nation's critical infrastructure, just like the power grid. But that doesn't mean the payments network needs to have the same always-on availability as the control systems to a nuclear power plant.

Finally, no institution can promise 100 percent uptime. All you can do is set that as your goal and set priorities accordingly. My point is that financial institutions should set more reasonable uptime goals and focus on things that matter more to customers, such as price, resilience, and security. There's only so much money in the operating budget, and 100% uptime is not the most important criterion in financial services.
Hospice_Houngbo   Payment Processors Go for 100 Percent Uptime   11/15/2011 11:21:20 AM
Re: Why?
@Damian Romano:

"What's the true cost of having 100% uptime?". Why do you think that customers would have to bear additional costs if the banks or financial intitutions go 100 percent uptime? My banks' online bankings services work 24/7 and I can make transfers from one account to another instantly without any disruption. The only problem I noticed is that some accounts take time to get updated, but so far I'm fine with that. I will probably not be happier if I was asked to pay additional costs for a "slightly" better service.
Sara Peters   Payment Processors Go for 100 Percent Uptime   11/15/2011 11:06:31 AM
Re: Why?
I don't know, Damian. I think that plenty of customers would pay a bit more for 100% uptime. Of course, that depends on exactly how much more money we're talking about. If the monthly fee goes up by $1, they might go for it; if it goes up $2 or more, they might not. But when a customer NEEDS to make a transfer from their savings account to their checking account -- because otherwise they'll bounce a check or be late on a payment, which might result in their electricity being turned off, or their credit score slammed, or their husband in jail overnight, or whatever -- they'll be happy they stayed with a bank that gives them 100% uptime.
Damian Romano   Payment Processors Go for 100 Percent Uptime   11/15/2011 8:50:22 AM
Re: Why?
@Ivan - I tend to agree with just about every sentiment you mentioned. What's the true cost of having 100% uptime? Is it really worth adding the incredible additional expense and then passing it on to the customer, just so the customer knows they'll always have 100% uptime? I'll bet if you surveyed most people, you'd find most have never experienced downtime. And if they did, it was only once or twice. Is that really worth it?
Mary E. Shacklett   Payment Processors Go for 100 Percent Uptime   11/15/2011 8:31:00 AM
Re: Murphy's law?
Situations like the one you described, Gigi, are what make premier payment processing performance critical.
Gigi   Payment Processors Go for 100 Percent Uptime   11/14/2011 10:39:27 PM
Re: Murphy's law?
Mary, it's necessary that payment gateways be 100% up, 24/7. Otherwise, from the user's point of view, it creates lots of issues. I once faced a similar problem with a payment gateway: while travelling, I tried to book an international flight ticket from my savings bank account. After making the payment, I got a message saying they were not able to process my request because the payment gateway was down -- and the worst part was that the amount was still deducted from my account. So I think keeping payments up 100% of the time is more beneficial and can increase business too.
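
Gigi's experience is the classic partial-failure case: the debit succeeded but the booking did not. One common mitigation, sketched below with hypothetical names (PaymentStore and charge() are illustrative, not any real gateway's API), is to attach an idempotency key to each payment attempt so a retry after an outage completes the original transaction instead of charging twice.

```python
# Minimal sketch of idempotency keys for payment retries: replaying the
# same request after a gateway outage cannot double-charge the customer.
import uuid


class PaymentStore:
    """Records the outcome of each idempotency key exactly once."""

    def __init__(self):
        self._results = {}

    def charge(self, key: str, account: str, amount: float) -> str:
        if key in self._results:        # replay after a failure/outage
            return self._results[key]   # return the recorded outcome
        # ... debit the account and book the ticket atomically here ...
        self._results[key] = f"charged {account} {amount:.2f}"
        return self._results[key]


store = PaymentStore()
key = str(uuid.uuid4())                 # one key per payment attempt
first = store.charge(key, "savings-001", 540.00)
retry = store.charge(key, "savings-001", 540.00)  # gateway back up; retry
assert first == retry                   # retried, not double-charged
```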
Ivan Schneider   Payment Processors Go for 100 Percent Uptime   11/14/2011 5:11:55 PM
Why?
Who's pushing for 100 percent uptime? This functionality certainly isn't free. As you pointed out, such a target requires multiple owned-and-operated datacenters, multiple communications links, and higher staffing costs.

Who in the marketplace is saying, "Well, cost control is important in our organization, but what's more important is 100% uptime"?

Are there consumers saying, "I don't mind paying extra fees to my bank, or paying a little extra for consumer goods, as long as I know that I have 100 percent uptime"?

I suspect that what's really happening is that increased regulation is pushing the financial services industry in the direction of rate-regulated energy utilities. Utilities can't just raise rates whenever they want, as they're constrained by state rate control boards. However, they're allowed to make a reasonable return on assets. That creates the incentives for investor-owned utilities to undertake the construction of huge projects, a.k.a. "featherbedding." If you want to increase revenue, increase the asset base. And now that the payments processors [edit: was "banks"] are being turned into utilities, they're turning to the same playbook.

If the central payment processors in the industry are going to make a move to 100 percent uptime, I'd like to have some assurance that whatever line of business they're in is subject to open and vibrant competition before they increase prices on everyone for marginal benefit.