We already covered how mainframe modernization isn’t just for the financial industry, so why not address the elephant in the room? The world’s biggest modernization challenges are concentrated in the banking industry.
Before the internet and cloud computing, and before smartphones and mobile apps, banks were shuttling payments through massive electronic settlement gateways and operating mainframes as systems of record.
Financial services companies are considered institutions because they manage and move the core aspects of our global economic system. And the beating heart of financial institutions is the IBM mainframe.
Banks have the most to gain if they succeed (and the most to lose if they fail) at bringing their mainframe application and data estates up to modern standards of cloud-like flexibility, agility and innovation to meet customer demand.
Why mainframe application modernization stalls
We’ve experienced global economic uncertainties in recent memory, from the 2008 “too big to fail” crisis to the current post-pandemic high interest rates that have left certain large depositor banks overexposed and insolvent.
While bank failures are often the result of bad management decisions and policies, there’s good reason to attribute some blame to delayed modernization initiatives and strategies. Couldn’t execs have run better analyses to spot risks within the data? Why did they fail to launch a new mobile app? Did someone hack them and lock customers out?
Everyone knows there’s an opportunity cost of putting off mainframe application modernization, but there’s a belief that it’s risky to change systems that are currently supporting operations.
Community and regional banks may lack the technical resources, whereas larger institutions carry overwhelming technical debt, face high-gravity data movement issues, or struggle to make the business case.
Banks large and small have likely all failed at one or more modernization or migration initiatives. As those efforts were scrapped, IT leaders within these organizations felt they had bitten off more than they could chew.
Transforming the modernization effort should not require a wholesale rewrite of mainframe code, nor a laborious and expensive lift-and-shift exercise. Instead, teams should modernize what makes sense for the most important priorities of the business.
Here are some great use cases of banks that went beyond simply restarting modernization initiatives to significantly improve the value of their mainframes in the context of highly distributed software architectures and today’s high customer-experience expectations.
Transforming core system and application code
Many banks are afraid to address technical debt within their existing mainframe code, which may have been written in COBOL or other languages before the advent of distributed systems. Often, the engineers who designed the original system are no longer present, and business interruptions are not a good option, so IT decision-makers delay transformation by tinkering around in the middle tier.
Atruvia AG is one of the world’s leading banking service technology vendors. More than 800 banks rely on its innovative services for nearly 100 billion annual transactions, supported by eight IBM z15 systems running in four data centers.
Instead of ripping and replacing, Atruvia decided to refactor in place, writing RESTful services in Java alongside the existing COBOL running on the mainframes. By gradually replacing 85% of its core banking transactions with modern Java, the company was able to build new functionality for bank customers while improving mainframe workload performance by 3X.
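This gradual replacement is often called the strangler-fig pattern: new code takes over one transaction type at a time while the rest stays on the legacy path. The sketch below is a minimal, hypothetical Java illustration of that idea, not Atruvia’s actual implementation; the class, transaction types and the stand-in legacy/modern handlers are all assumptions for the example.

```java
import java.util.Map;
import java.util.Set;
import java.util.function.Function;

/**
 * Minimal strangler-fig router (hypothetical example): transactions
 * migrate from a legacy COBOL bridge to a modern Java service one
 * transaction type at a time.
 */
public class TransactionRouter {

    /** Transaction types already refactored into the Java service. */
    private final Set<String> migrated;
    private final Function<Map<String, String>, String> legacyBridge;
    private final Function<Map<String, String>, String> javaService;

    public TransactionRouter(Set<String> migrated,
                             Function<Map<String, String>, String> legacyBridge,
                             Function<Map<String, String>, String> javaService) {
        this.migrated = migrated;
        this.legacyBridge = legacyBridge;
        this.javaService = javaService;
    }

    /**
     * Route a transaction by its type: migrated types go to the new
     * Java path, everything else stays on the existing COBOL path.
     */
    public String handle(String type, Map<String, String> payload) {
        return migrated.contains(type)
                ? javaService.apply(payload)
                : legacyBridge.apply(payload);
    }

    public static void main(String[] args) {
        TransactionRouter router = new TransactionRouter(
                Set.of("BALANCE_INQUIRY", "TRANSFER"),  // refactored so far
                p -> "COBOL:" + p.get("account"),       // stand-in for the CICS/COBOL call
                p -> "JAVA:" + p.get("account"));       // stand-in for the new REST service

        System.out.println(router.handle("TRANSFER", Map.of("account", "4711")));
        System.out.println(router.handle("LOAN_ORIGINATION", Map.of("account", "4711")));
    }
}
```

The key property is that the migrated set can grow release by release, so each cutover is small and reversible, which is what makes refactoring in place lower-risk than a wholesale rewrite.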
Ensuring cyber resiliency through faster recovery
Most banks have a data protection plan that includes some form of redundancy for disaster recovery (DR), such as a primary copy of the production mainframe in the data center and perhaps an offsite secondary backup or virtual tape solution that gets a new batch upload every few months.
As data volumes inexorably grow, with more transactions and application endpoints, making copies through legacy backup technologies becomes increasingly costly and time-consuming. Reconstituting those copies is also slow, which can leave a downtime gap in DR coverage. Banks critically need timelier backups and faster recovery to safeguard the modern computing environment against outages and threats such as ransomware.
ANZ, a top-five bank in Australia, sought to increase its capacity for timelier mainframe backups and faster DR performance to ensure high availability for its more than 8.5 million customers.
They built out an inter-site resiliency capability, running mirrored IBM zSystems servers with the HyperSwap function to enable multi-target storage swaps without outages: any of the identical servers can take over production workloads while another undergoes a backup or recovery process.
ANZ’s IT leadership gains peace of mind from better system availability; more importantly, the bank now has a modern disaster recovery posture that can be certified to provide business continuity for its customers.
Gaining visibility through enterprise-wide business and risk analytics
Banks depend on advanced analytics for almost every aspect of key business decisions that affect customer satisfaction, financial performance, infrastructure investment and risk management.
Complex analytical queries atop huge datasets on the mainframe can eat up compute budgets and take hours or days to run. Moving the data somewhere else—such as a cloud data warehouse—can come with even greater transport delays, resulting in stale data and poor quality decisions.
Garanti BBVA, Turkey’s second-largest bank, deployed IBM Db2 Analytics Accelerator for z/OS, which accelerates query workloads while reducing mainframe CPU consumption.
The separation of analytics workloads from the concerns and costs of the mainframe production environment allows Garanti to run more than 300 analytics batch jobs every night, and a compliance report that used to take two days to run now only takes one minute.
Improving customer experience at DevOps speed
Banks compete on their ability to deliver innovative new applications and service offerings to customers, so agile dev/test teams are constantly shipping software features. We naturally tend to picture these as front-end improvements: smartphone apps and API-driven integrations with cloud services.
But almost every one of these new features will eventually touch the mainframe. Why not bring the mainframe team forward as first-class participants in the DevOps movement?
Danske Bank decided to bring nearly 1,000 internal mainframe developers into a firm-wide DevOps transformation movement, using the IBM Application Delivery Foundation for z/OS (ADFz) as a platform for feature development, debugging, testing and release management.
Even existing COBOL and PL/I code could be ingested into the CI/CD pipeline, then opened and edited intuitively within developers’ IDEs. No more mucking with green screens here. The bank can now bring new offerings to market in half the time it used to take.
Read the Danske Bank case study https://www.ibm.com/case-studies/danske_bank_as
The Intellyx Take
Even newer “born-in-the-cloud” fintech companies would be wise to consider how their own innovations need to interact with an ever-changing hybrid computing environment of counterparties.
A transaction on a mobile app will still eventually hit global payment networks, regulatory entities and other banks, each with its own mainframe compute and storage resources behind the request fulfillment.
There will never be a singular path forward here because no two banks are identical, and there are many possible transformations that could be made on the mainframe application modernization journey.
IT leaders need to start somewhere and select use cases that are the best fit for their business needs and the architecture of the unique application estate the mainframe will live within.
The post Banking on mainframe-led digital transformation for financial services appeared first on IBM Blog.