Priya Jaiswal is a distinguished authority in banking, business, and finance, with a deep understanding of the structural and regulatory frameworks that underpin modern financial institutions. With a career defined by her mastery of market analysis and portfolio management, she has become a vital voice for banks navigating the complex digital landscape. In this conversation, we explore the precarious relationship between financial institutions and the handful of core service providers that control their back-end operations. From the risks of legacy code to the looming talent crisis in archaic programming languages, Priya offers a roadmap for how banks can survive and thrive despite the inertia of their technological foundations.
The discussion focuses on the critical operational and regulatory bottlenecks created by a concentrated market of core providers. Key themes include the dangerous accumulation of custom code on legacy systems, the “uneven” power dynamics during contract negotiations, and the difficult financial calculus involved in multi-year system migrations.
When core providers fail to implement timely updates for changing regulations, what specific compliance risks do banks face, and how should an institution handle situations where the provider’s compliance team disagrees with an internal risk assessment? Please elaborate with a step-by-step approach to resolving these disputes.
When a provider lags on updates, the bank faces massive exposure because compliance failures often impact a large number of customers simultaneously, leading to systematic regulatory breaches. In my experience, these disputes often arise because providers are looking at their entire client base, while the bank is focused on its specific legal obligations. To resolve them, first, the bank must document the defect with granular data showing exactly how the current system logic violates a specific regulation. Second, you must escalate the issue to the provider’s compliance team, but do not go empty-handed; an external assessment or legal opinion can provide the necessary leverage to prove your case. In at least three disputes I have been involved in, the provider initially disagreed with our assessment, yet we reached a successful resolution by maintaining a firm, evidence-based stance. Finally, if the provider gives a timeline that doesn’t align with your upcoming exams, you must implement temporary manual controls or “wraparound” processes to mitigate risk while the permanent fix is being coded.
Many institutions layer custom code over legacy mainframe systems to manage costs. What are the long-term operational risks of this “compounding” code, and what specific steps are required to untangle these systems without disrupting essential functions like deposit and loan processing? Include any relevant metrics or anecdotes.
The primary risk is the “compounding” effect, where years of custom code are piled on top of other custom code, creating a tangled web that becomes nearly impossible to decipher. This often happens because banks choose to make updates in-house to avoid the high costs and long lead times of the core provider. To untangle this, an institution must first perform a comprehensive code audit to map out every custom dependency tied to their account management and loan processing. You then need to prioritize “clean-up” by identifying which layers are redundant and which are mission-critical, moving slowly to ensure that data integrity remains intact. I’ve seen situations where failing to document these layers properly leads to “defects” that take months to fix, purely because no one left on the team understands the original custom logic. It is overwhelming for a programming team to peel back these layers without breaking the essential back-end functions that keep the bank running.
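The audit step described above can be sketched in code. The following is a minimal, hypothetical Python sketch of a dependency scan over a bank's COBOL sources, assuming shared modules are pulled in via `COPY` statements; the file extension, directory layout, and function names are illustrative assumptions, not a description of any real audit tool:

```python
import re
from collections import defaultdict
from pathlib import Path

# Matches COBOL COPY statements, e.g. "COPY ACCTREC." or "copy loanmst."
COPY_RE = re.compile(r"\bCOPY\s+([A-Z0-9-]+)", re.IGNORECASE)

def build_dependency_map(source_dir: str) -> dict[str, set[str]]:
    """Map each COBOL program (by file stem) to the copybooks it pulls in."""
    deps: dict[str, set[str]] = defaultdict(set)
    for path in Path(source_dir).glob("*.cbl"):
        text = path.read_text(errors="ignore")
        for match in COPY_RE.finditer(text):
            deps[path.stem].add(match.group(1).upper())
    return dict(deps)

def shared_copybooks(deps: dict[str, set[str]]) -> dict[str, int]:
    """Count how many programs depend on each copybook; heavily shared
    copybooks are the riskiest layers to change during clean-up."""
    counts: dict[str, int] = defaultdict(int)
    for books in deps.values():
        for book in books:
            counts[book] += 1
    return dict(counts)
```

A map like this makes the "redundant versus mission-critical" triage concrete: a copybook referenced by dozens of programs is mission-critical by construction, while one referenced nowhere is a candidate for removal.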
Community banks often navigate an uneven negotiating landscape with the few dominant service providers in the market. How can smaller banks better leverage their position during contract renewals, and what practical changes would you expect to see if federal agencies began exercising more direct oversight of these vendors?
Smaller banks are currently in a very “uneven” commercial negotiating relationship because three major players—Fiserv, Fidelity National Information Services, and Jack Henry—dominate the market. To gain leverage, community banks should consider collective bargaining through trade groups or associations to present a unified front on service-level agreements and cost structures. If the OCC or other federal agencies began direct oversight, I would expect to see mandatory “accountability” standards that force providers to fix defects within a standardized timeframe. Currently, many bankers and examiners have expressed deep concerns about the lack of responsiveness, and direct federal supervision could finally hold these vendors to the same rigorous standards as the banks they serve. This shift would likely result in more transparent pricing and more frequent, federally mandated system updates that align with the rapid pace of regulatory changes.
Migrating to a new core system is a multi-year effort that can sometimes introduce more defects than the legacy setup. How should a bank weigh the financial risks of delaying an upgrade versus the potential for system failure during migration, and what key metrics determine if a transition is successful?
Choosing to migrate is a high-stakes gamble; I have seen a large bank switch to a brand-new system only to find they had more compliance issues afterward than they did with their decades-old mainframe. The financial risk of delaying an upgrade is “kicking the can” on technical debt, which only grows more expensive as the system becomes more fragile. To weigh these risks, a bank should use metrics like “mean time to repair defects” and the “cost of manual workarounds” required by the legacy system. A successful transition is not just about the “go-live” date; it is measured by data accuracy rates, the number of post-migration customer complaints, and whether the new system can service all products on one unified platform. It takes months, if not years, to build a migration plan that accounts for the specialized knowledge required to move the entire code base without a catastrophic failure.
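The two metrics named above can be computed from an ordinary defect log. Below is a minimal sketch, assuming each record carries an open timestamp, a repair timestamp, and the staff cost of the manual workaround used while the defect was open; the record shape and field names are hypothetical:

```python
from dataclasses import dataclass
from datetime import datetime

@dataclass
class Defect:
    opened: datetime
    repaired: datetime
    workaround_cost: float  # staff cost of the manual process while open

def mean_time_to_repair(defects: list[Defect]) -> float:
    """Average hours between a defect being opened and repaired."""
    hours = [(d.repaired - d.opened).total_seconds() / 3600 for d in defects]
    return sum(hours) / len(hours)

def total_workaround_cost(defects: list[Defect]) -> float:
    """Running cost of the manual 'wraparound' controls the legacy system forces."""
    return sum(d.workaround_cost for d in defects)
```

Tracked quarter over quarter, a rising mean time to repair and a growing workaround bill quantify the "kicking the can" cost of delay and give the board a number to weigh against the migration budget.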
With the pool of programmers specialized in older coding languages like COBOL shrinking as they reach retirement, how can banks bridge this looming talent gap? Does this labor shortage represent a forced turning point for the industry, and what are the primary barriers preventing more nimble competitors from entering the market?
This labor shortage is absolutely a forced turning point for the industry because we are reaching a cliff where the individuals who understand the “archaic” systems will simply be out of the workforce. To bridge this gap, banks must either invest in massive retraining programs or begin the painful process of migrating to more modern, open-architecture systems that use contemporary coding languages. The primary obstacle for new, more nimble competitors is the incredibly high barrier to entry: the complexity of back-end functions and the regulatory hurdles make it difficult for disruptors to break the oligopoly. Until more nimble players can prove they can handle the volume of a complex institution, banks are stuck with the major providers, who are themselves grappling with the same talent shortages. It is a cycle of dependency where the lack of expertise on both sides creates a significant risk to the stability of the financial system.
What is your forecast for core service providers?
I forecast that core service providers will soon face a “reckoning of accountability” driven by increased pressure from the OCC and other federal regulators. As the talent pool for COBOL and other legacy languages evaporates, we will see a surge in forced migrations, which will initially be messy and expensive but will eventually lead to a more fragmented market with new, cloud-native entrants. Within the next few years, I expect the “big three” to undergo significant structural shifts, either through massive internal system overhauls or by acquiring the very disruptors that are currently trying to enter the space. The industry cannot maintain the current status quo of compounding custom code and uneven negotiations forever; the sheer weight of regulatory demand and technical obsolescence will mandate a complete modernization of the banking core.
