In a world buzzing with AI hype, Priya Jaiswal stands as a voice of seasoned clarity. A recognized authority in Banking, Business, and Finance, she has built a formidable reputation by cutting through the noise to focus on tangible value and strategic foresight. In this conversation, she deconstructs the vague promises of AI-driven efficiency that dominate boardrooms today. We explore the critical difference between using AI for marginal gains versus true business model reinvention, the tell-tale signs of a genuine expert in a market flooded with pretenders, and the leadership imperative for both banking and fintech executives to navigate what she calls the biggest paradigm shift of our lifetime.
A bank COO recently announced a “6% operational uplift” from AI, earning applause. How would you deconstruct that vague claim, and what specific metrics, such as error reduction or unit cost changes, should leaders demand in order to see genuine, sustainable value beyond marginal gains? Please share some examples.
That announcement is a perfect example of the disconnect between boardroom presentations and on-the-ground reality. Having spent the first part of my career in process re-engineering, I can tell you that “operational uplift” is not a real metric; it’s a convenient, catch-all phrase. It sounds impressive, but it’s hollow. What does it actually mean? Is it a 6% reduction in costly errors that require rework? That would be significant. Or is it a 6% reduction in delays that were frustrating customers? Also valuable. But if it’s just a marginal productivity gain at the process level, it’s incredibly fragile. If Clive in the queries department starts taking longer smoke breaks, that entire 6% uplift could be wiped out. A true leader should be asking to see the hard numbers: show me the change in unit cost, the reduction in risk exposure, or the improvement in our SLAs. Without that, you’re just applauding a number that feels good but might not mean anything tomorrow.
Many firms use AI for incremental improvements, effectively ‘containing’ it within existing processes. What is the strategic risk of this approach, and how does it prevent leaders from grappling with AI’s potential to fundamentally enhance or even obliterate their core business model? Please provide a specific example.
Containing AI is the most common mistake I see, and it’s profoundly dangerous. The strategic risk is that you become so focused on optimizing the deck chairs that you don’t see the iceberg. When you use AI simply to achieve a small efficiency gain, like helping a developer improve their code commits or an analyst write better user stories, you’re treating it like just another software tool. Frankly, nobody outside your IT department cares what tools you use. Your customers, your investors, they don’t care about your internal processes. The real question, the one this incremental approach allows you to avoid, is this: what can AI do to my entire business vertical? This technology has the power to undermine, enhance, or completely obliterate your core offering. By focusing on a 6% gain in account maintenance, you avoid the terrifying but necessary work of considering if AI could make your entire account maintenance division obsolete. It’s a comfortable, but ultimately self-defeating, strategy.
The market is full of newly minted “AI consultants.” What are the critical red flags that a supposed expert may lack deep knowledge, and how can a leader distinguish them from a true expert whose insights might be humbling but are essential for navigating this shift?
The red flags are often hiding in plain sight. You have to look at their history. Was this person working in sales at an open banking startup 18 months ago? Were they a senior architect who just got laid off and suddenly rebranded? This space didn’t just appear overnight; people have been doing serious research and building in this field for decades. A true expert won’t just tell you what you want to hear. Their insights will likely be humbling. They will challenge your assumptions and force you to confront uncomfortable truths about your business. That’s how you know you’ve found them. The fake experts will sell you a slick presentation and a product that, if you scratch the surface, is just a thin layer of window dressing on an off-the-shelf AI model. A real expert will start by asking you the hard questions, not by giving you easy answers.
For leaders in banking and fintech, the challenge is no longer just observing disruption but facing it directly. What initial, practical steps should they take to truly understand AI’s impact on their specific vertical, and how can they build the leadership capacity to act on that understanding?
The first step is to have the right conversations, and that means deliberately stepping outside your usual circle of advisors. Don’t just pull in your traditional consulting partner because you have some service credits to use up. Don’t rely on your CTO, who is already swamped with a dozen other priorities. And please, don’t listen to Steve, who used to sit across from you and is now the founder of some “AI-powered” something. You need to seek out the people who have been living and breathing this for years. Have the conversations that matter, the ones that are humbling and force you to fundamentally question your business. Building leadership capacity starts with this genuine, deep understanding. For years, we’ve lambasted banking leaders for their slow response to digital, but now the shoe is on both feet. Fintech leaders are facing the same existential questions, and the first step toward acting correctly is a profound understanding.
You suggest that treating AI as another “tick-box” exercise repeats past mistakes. Could you elaborate on the heavy legacy left by previous tech waves that were handled this way, and how can organizations ensure this paradigm shift results in genuine transformation rather than squandered potential?
Absolutely. The “tick-box” mentality is what created the technological mess many banks are still trying to untangle today. When mobile banking or online services emerged, organizations that treated them as a checkbox simply bolted on a new feature without rethinking the underlying processes. This left us with a heavy legacy of siloed systems, fragmented customer experiences, and immense technical debt. We see it everywhere. To avoid repeating this, leaders must recognize that AI is not just another project to be managed; it is the single biggest paradigm shift of our lifetime. The goal isn’t to be able to say, “we’re doing AI.” The goal is to fundamentally transform how you create and deliver value. This requires a cultural shift away from project-based thinking and toward a continuous, strategic evolution of the entire business model. Otherwise, you’re just squandering the potential of a revolutionary technology for an advantage that can be lost if someone takes a longer lunch break.
What is your forecast for the financial services industry over the next five years as AI adoption moves from marginal productivity gains to fundamental business model reinvention?
My forecast is that we’re going to see a great divergence. On one side, you’ll have the institutions—both banks and fintechs—that continue to chase those marginal, 6% productivity gains. They will use AI to do the same old things a little bit faster or cheaper, and they will become progressively less relevant. On the other side, you’ll have the players who truly grapple with AI’s potential to reinvent their core business. They will move beyond optimizing existing processes and start creating entirely new ways to manage risk, serve customers, and structure financial products. The winners won’t be the ones who are just “doing AI,” but the ones who are fundamentally rethinking what it means to be a financial services company in an AI-native world. The next five years will be less about incremental improvement and more about a complete re-imagining of the industry’s foundation.
