Priya Jaiswal is a recognized authority in Banking, Business, and Finance, with extensive expertise in market analysis, portfolio management, and international business trends. She joins us to discuss the seismic shifts occurring as financial institutions grapple with technological transformation. Our conversation will explore the hidden costs of fragmented systems, the strategic deployment of AI for both internal efficiency and customer engagement, and the pivotal role of multi-cloud architectures in unlocking data’s true potential. We will also delve into practical frameworks for data refinement and a pragmatic approach for institutions to begin their AI journey without getting overwhelmed.
You mentioned that financial institutions are racking up technical debt from fragmented systems. Could you share a specific example of this and walk us through how a converged platform directly reduces the operational overhead and regulatory risk you described?
Absolutely. Over the last 10 to 15 years, banks adopted a slew of “fit-for-purpose” technologies. They were great for solving specific problems, but the result is a tangled web of systems that don’t speak to each other. A classic example is having customer data spread across a core banking system, a separate wealth management platform, and a third-party lending application. To get a single view of that customer, the bank has to rely on complex, clunky extract, transform, and load—or ETL—processes. It’s a constant, manual effort that creates immense tech debt. This isn’t just inefficient; it’s risky. Every time you create a copy of data and move it, you open a new door for security breaches or compliance missteps, which is a nightmare in a regulated industry. A converged platform tackles this head-on by bringing all that data into a unified environment. Instead of moving data around, you can act on it and run AI models in real time, directly on the platform. This slashes the operational overhead and drastically reduces risk because you’re no longer creating countless copies of sensitive information.
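To make that concrete, here is a minimal sketch, with hypothetical table and column names, of what a single customer view can look like once the data already sits on one converged platform: one query run in place, rather than a chain of ETL jobs copying data between the core banking, wealth, and lending systems.

```python
# A sketch only: the schema (customers, accounts, wealth_positions, loans)
# is hypothetical, standing in for the core banking, wealth management,
# and lending systems discussed above.
SINGLE_CUSTOMER_VIEW_SQL = """
SELECT c.customer_id,
       c.full_name,
       dep.total_deposits,
       w.portfolio_value,
       ln.total_outstanding
FROM customers c
LEFT JOIN (SELECT customer_id, SUM(balance) AS total_deposits
           FROM accounts GROUP BY customer_id) dep
       ON dep.customer_id = c.customer_id
LEFT JOIN wealth_positions w
       ON w.customer_id = c.customer_id
LEFT JOIN (SELECT customer_id, SUM(outstanding_principal) AS total_outstanding
           FROM loans GROUP BY customer_id) ln
       ON ln.customer_id = c.customer_id
"""

def single_customer_view(connection):
    """Run the view in place on the converged platform through any DB-API
    connection -- no extracts, and no copies of sensitive data leave it."""
    cursor = connection.cursor()
    try:
        cursor.execute(SINGLE_CUSTOMER_VIEW_SQL)
        return cursor.fetchall()
    finally:
        cursor.close()
```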
You distinguished between internal AI for efficiency and external AI for customer experience. Beyond identifying loan seekers, could you detail another innovative external use case and outline the key metrics a bank should use to measure its impact on customer retention or acquisition?
One of the most innovative areas we’re seeing is in proactive financial wellness. Imagine an AI model that doesn’t just look at a customer’s balance but analyzes their spending patterns, income streams, and even external market signals. It could identify a customer who is consistently spending more than they earn and might be heading toward financial distress. Instead of waiting for a missed payment, the bank could proactively reach out with a personalized offer for a debt consolidation loan, a budget planning tool, or a session with a financial advisor. This completely flips the script from a reactive to a proactive relationship. To measure its impact, you’d look at metrics like the adoption rate of the suggested products or services. For acquisition, you could track the conversion rate of non-customers who engage with a public-facing version of the wellness tool. For retention, the key metrics would be a decrease in loan defaults within the target cohort and, more broadly, an increase in the customer’s Net Promoter Score (NPS) and the number of products they hold with the bank. It’s about demonstrating real value beyond just transactions.
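To ground those measurements, here is a minimal sketch in pandas, with hypothetical column names, of how the retention and acquisition metrics for such a wellness program might be computed from a simple campaign-outcomes table.

```python
# A sketch only: the column names below are illustrative, one row per person
# contacted by the proactive wellness campaign.
import pandas as pd

def wellness_campaign_metrics(df: pd.DataFrame) -> dict:
    """Expects boolean/numeric columns: is_existing_customer, accepted_offer,
    defaulted_within_12m, nps_score, products_held_before, products_held_after."""
    customers = df[df["is_existing_customer"]]
    prospects = df[~df["is_existing_customer"]]
    return {
        # Adoption of the suggested products or services
        "offer_adoption_rate": customers["accepted_offer"].mean(),
        # Acquisition: non-customers who engaged and converted
        "prospect_conversion_rate": prospects["accepted_offer"].mean(),
        # Retention signals: defaults, NPS, and product holdings in the cohort
        "default_rate_in_cohort": customers["defaulted_within_12m"].mean(),
        "avg_nps": customers["nps_score"].mean(),
        "avg_product_uplift": (customers["products_held_after"]
                               - customers["products_held_before"]).mean(),
    }
```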
You described the Oracle database as being “stranded” on-premise. Now that your multi-cloud strategy makes it available on AWS, Azure, and Google Cloud, how does this specifically accelerate data consolidation for AI, and could you share an anecdote about a customer’s reaction to this new flexibility?
For decades, the Oracle database was the heart of the financial institution, but it was often locked away in an on-premise data center. As banks moved other applications and workloads to the cloud, this core data became “stranded.” They couldn’t easily connect their cloud-native analytics tools to their most valuable data without complex and slow integrations. This new multi-cloud approach fundamentally changes the game. It allows a bank to run its Oracle database directly on AWS, Azure, or Google Cloud, right alongside its other cloud applications. This eliminates the primary bottleneck for data consolidation. Suddenly, you can feed that rich transactional data directly into your cloud-based AI and machine learning pipelines without delay. I was speaking with a CIO from a regional bank recently, and when we discussed this, his reaction was one of sheer relief. He said, “You mean I can finally stop building custom bridges to get to my own data? This lets my data scientists work where they want to work, with the data they need.” It’s about meeting customers where they are and removing friction, and we’re seeing a tremendous amount of excitement because of it.
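As an illustration of what that unlocks, here is a minimal sketch assuming the python-oracledb driver, with placeholder connection details and table names, of pulling recent transactional data straight from an Oracle database running alongside the cloud ML stack into a pandas DataFrame.

```python
# A sketch assuming the python-oracledb driver; the DSN, credentials, and
# table/column names are placeholders, not a real configuration.
import oracledb
import pandas as pd

def load_recent_transactions(user: str, password: str, dsn: str) -> pd.DataFrame:
    """Pull the last three months of transactions from the Oracle database,
    ready for feature engineering in the cloud-based ML pipeline."""
    with oracledb.connect(user=user, password=password, dsn=dsn) as conn:
        with conn.cursor() as cur:
            cur.execute("""
                SELECT customer_id, txn_date, amount, merchant_category
                FROM transactions
                WHERE txn_date >= ADD_MONTHS(SYSDATE, -3)
            """)
            columns = [d[0].lower() for d in cur.description]
            return pd.DataFrame(cur.fetchall(), columns=columns)
```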
You introduced the “medallion architecture” for refining data from bronze to gold. Can you provide a step-by-step breakdown of how a bank would apply this to a raw dataset, detailing the transformations that occur at the silver and gold stages to make it AI-ready?
The medallion architecture is a wonderfully intuitive way to think about data refinement. It’s a journey from raw material to a polished, valuable asset. Let’s say a bank wants to build a customer churn prediction model. The process starts with the Bronze layer. Here, you’re just ingesting all the raw data as-is—transaction logs, website clickstreams, call center notes, mobile app interactions. It’s messy, often duplicated, and stored in its native format. It’s about capturing everything without losing fidelity.
Next comes the Silver layer. This is where the real transformation begins. The bank’s data engineers would take that raw bronze data and start cleaning and structuring it. They would filter out the noise, deduplicate customer records, and conform different date formats. For our churn model, they would join the transaction data with the customer’s CRM profile and the call center notes, creating a more cohesive, queryable dataset. The data here is validated and enriched, but it’s not yet tailored for a specific use case.
Finally, we arrive at the Gold layer. This is the “business-ready” data. For the churn model, data scientists would take the silver data and perform feature engineering, creating highly aggregated and specific attributes like ‘average transaction value over the last 30 days’ or ‘number of customer service calls in the last quarter.’ This gold table is optimized for AI consumption. It’s clean, reliable, and directly feeds the machine learning model to produce actionable insights. Each stage adds significant value, turning a flood of raw data into a strategic asset.
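For readers who want to see those three hops end to end, here is a minimal sketch, assuming a PySpark environment and hypothetical storage paths and column names (the call-center notes are omitted for brevity), of the bronze-to-silver-to-gold refinement for the churn model.

```python
# A sketch only: paths, table layouts, and column names are hypothetical.
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.appName("churn_medallion").getOrCreate()

# Bronze: ingest the raw data as-is, preserving fidelity.
bronze_txns = spark.read.json("s3://bank-raw/transaction_logs/")
bronze_crm = spark.read.json("s3://bank-raw/crm_profiles/")

# Silver: clean, deduplicate, conform formats, and join into one queryable set.
silver = (
    bronze_txns
    .dropDuplicates(["transaction_id"])
    .withColumn("txn_date", F.to_date("txn_timestamp"))
    .join(bronze_crm.dropDuplicates(["customer_id"]), "customer_id", "left")
)
silver.write.mode("overwrite").parquet("s3://bank-curated/silver/customer_activity/")

# Gold: feature engineering for the churn model (business-ready aggregates).
gold = (
    silver
    .where(F.col("txn_date") >= F.date_sub(F.current_date(), 30))
    .groupBy("customer_id")
    .agg(
        F.avg("amount").alias("avg_txn_value_30d"),
        F.count("*").alias("txn_count_30d"),
    )
)
gold.write.mode("overwrite").parquet("s3://bank-curated/gold/churn_features/")
```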
Your advice is to “start somewhere” with a quick win. For a financial institution feeling overwhelmed, what are the first three practical steps it should take to identify a high-value, low-complexity use case and establish the metrics needed to prove its business value?
The feeling of being overwhelmed is completely understandable given the hype around AI. The key is to cut through the noise with a disciplined approach. The first step is to convene a small, cross-functional team of business, technology, and data leaders and ask a simple question: “What is a recurring, high-friction process or an unmet customer need that, if solved, would deliver tangible value?” Brainstorm a list, focusing on problems, not technologies. The second step is to vet that list against two criteria: business value and complexity. Look for that sweet spot—a use case that promises a clear return but doesn’t require overhauling your entire infrastructure. An internal use case, like automating a compliance check, is often a great place to start. The third, and most critical, step is to define success before you write a single line of code. What is the business value you’re looking to drive? Is it “reduce manual review time by 20%” or “increase customer offer acceptance by 5%”? Create these specific, measurable metrics from day one. By following these steps, you can secure an early victory that builds momentum and credibility for the entire AI program.
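As a small illustration of that third step, here is a sketch, with purely illustrative metric names and numbers, of pinning down measurable success criteria before any model work begins.

```python
# A sketch with illustrative values only: defining the pilot's success
# metrics up front so its business value can be proven later.
from dataclasses import dataclass

@dataclass
class SuccessMetric:
    name: str
    baseline: float
    target: float
    lower_is_better: bool = False

    def met(self, observed: float) -> bool:
        return observed <= self.target if self.lower_is_better else observed >= self.target

pilot_metrics = [
    # "Reduce manual review time by 20%"
    SuccessMetric("manual_review_hours_per_case", baseline=2.0, target=1.6, lower_is_better=True),
    # "Increase customer offer acceptance by 5% (relative)"
    SuccessMetric("offer_acceptance_rate", baseline=0.10, target=0.105),
]
```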
You emphasized a holistic approach, looking “beyond the four walls of the bank” for external signals. What are some other powerful external data sources financial institutions are using, and what are the primary challenges they face in securely integrating this data with their own?
Looking beyond your own four walls is absolutely critical for staying competitive. Beyond identifying potential loan seekers, institutions are using a variety of external sources. For example, in commercial lending, they might use supply chain data or shipping manifests to gauge the health of a business. In wealth management, they might analyze market sentiment data and macroeconomic indicators to inform portfolio strategies. Some are even using anonymized geospatial data to understand foot traffic in retail areas to predict economic trends. The challenges, however, are significant. The first is security. Every time you bring in an external data feed, you create a potential new vulnerability. You have to ensure the data is sourced ethically, that you have the right to use it, and that it’s transported and stored with the same rigorous security protocols as your own internal data. The second challenge is integration. External data rarely comes in a clean, ready-to-use format. It has to be validated, cleansed, and harmonized with your internal data schemas. This requires robust data management capabilities to ensure you’re making decisions based on a single, trusted source of truth rather than a messy combination of conflicting information.
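On the integration point, here is a minimal sketch in pandas, with hypothetical field names, of the kind of validation and harmonization an external feed goes through before it can be trusted alongside internal data.

```python
# A sketch with hypothetical field names: conforming an external feed
# (e.g. a supplier-health signal) to the internal schema before use.
import pandas as pd

COLUMN_MAP = {"entityRef": "business_id", "obsDate": "signal_date", "value": "signal_value"}

def harmonize_external_feed(raw: pd.DataFrame) -> pd.DataFrame:
    df = raw.rename(columns=COLUMN_MAP)[list(COLUMN_MAP.values())]
    # Conform types; anything unparseable becomes NaN/NaT and is rejected below.
    df["signal_date"] = pd.to_datetime(df["signal_date"], errors="coerce")
    df["signal_value"] = pd.to_numeric(df["signal_value"], errors="coerce")
    df["business_id"] = df["business_id"].astype("string")
    # Reject incomplete rows and duplicates so only a trusted, deduplicated
    # view is joined with internal data downstream.
    df = df.dropna(subset=["business_id", "signal_date", "signal_value"])
    return df.drop_duplicates(subset=["business_id", "signal_date"])
```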
What is your forecast for the future of data and AI in financial services?
My forecast is that the line between data strategy and business strategy will completely disappear. In the next few years, the most successful financial institutions will be those that operate as true data-driven organizations from top to bottom. This means moving beyond isolated AI projects and embedding real-time intelligence into every core process and customer interaction. The adoption of multi-cloud architectures will become standard practice, not an option, providing the flexibility and power needed to handle massive, diverse datasets. We will also see a major push toward data democratization, where secure, refined, gold-level data is made accessible to more people within the organization, empowering them to make smarter decisions faster. Ultimately, the future is about creating a holistic, intelligent ecosystem where internal and external data flow seamlessly, fueling AI that not only drives efficiency but also creates deeply personalized and proactive customer experiences.
