Today we’re speaking with Zachery Anderson, who has just taken on one of the most significant data roles in global finance as the new Chief Data and Analytics Officer at JP Morgan Payments. His background spans more than a decade in the fast-paced world of gaming at Electronic Arts and, more recently, leadership of AI and open banking at NatWest, and he brings that unusual perspective to an institution that processes an astounding $12 trillion daily. This conversation explores how he plans to bridge those worlds by applying lessons from gaming to high-stakes finance, the immense challenge of innovating with AI while maintaining near-perfect reliability, and his strategic vision for the future of payments data.
At NatWest, your role included open banking and personalization. Moving to a firm that processes $12 trillion daily, how will you adapt those experiences? Please outline the strategic differences you foresee in applying AI to such a massive, high-stakes operational scale.
That’s the central question, isn’t it? At NatWest, the focus was very much on the individual customer experience through open banking and personalization. Here, the scale is astronomical. When you’re handling $12 trillion a day, the strategy shifts from individual-level personalization to system-level intelligence and optimization. It’s less about suggesting a new savings product and more about detecting a potential anomaly in a multi-billion dollar transaction flow that could have systemic implications. The core principles of using data to create value remain, but the application of AI moves from a customer-centric lens to one focused on systemic integrity, efficiency, and security at a global scale.
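To illustrate what system-level monitoring of a transaction flow can mean in practice, here is a minimal sketch of anomaly detection over aggregate volumes using a rolling z-score. The window size, threshold, and synthetic data are assumptions for illustration only, not a description of JP Morgan’s actual systems.

```python
# Minimal sketch: flag anomalous aggregate payment volumes with a rolling z-score.
# Window size, threshold, and the synthetic data are illustrative assumptions only.
from collections import deque
from statistics import mean, stdev

def detect_flow_anomalies(hourly_volumes, window=24, z_threshold=4.0):
    """Yield (hour_index, volume, z_score) for hours whose aggregate volume
    deviates sharply from the trailing window's mean."""
    history = deque(maxlen=window)
    for i, volume in enumerate(hourly_volumes):
        if len(history) == window:
            mu, sigma = mean(history), stdev(history)
            if sigma > 0:
                z = (volume - mu) / sigma
                if abs(z) > z_threshold:
                    yield i, volume, round(z, 2)
        history.append(volume)

# Synthetic example: a single spike injected into an otherwise steady flow.
volumes = [500.0 + (i % 5) for i in range(48)]
volumes[40] = 900.0
for hour, vol, z in detect_flow_anomalies(volumes):
    print(f"hour {hour}: volume {vol} flagged (z={z})")
```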
You spent over a decade at Electronic Arts before entering finance. What specific data-handling lessons or cultural mindsets from the fast-paced gaming world do you believe are most transferable to financial services? Could you share a particular technique you hope to implement?
My 12 years at EA were a masterclass in understanding real-time data and dynamic user behavior. In gaming, you’re constantly A/B testing, iterating, and analyzing telemetry data from millions of players to improve engagement in real time. That agile, test-and-learn mindset is incredibly valuable for finance. One specific technique is building ‘digital twins’: simulated environments of our payment systems. In gaming, we’d use these to test new features without breaking the live game. Here, we can use them to model the impact of new AI algorithms or market stressors on that massive $12 trillion flow, allowing us to innovate safely without ever touching the live operational environment.
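As a rough illustration of the digital-twin idea, the toy sketch below replays a synthetic transaction stream against a simulated two-rail network and compares a baseline routing policy with a candidate one entirely offline. The rail names, capacities, and policies are hypothetical; the point is only that a candidate algorithm can be scored without touching live processing.

```python
# Toy 'digital twin': replay a synthetic transaction stream against a simulated
# network so a candidate algorithm can be evaluated offline. All parameters
# (rail capacities, volumes, the routing policies themselves) are illustrative.
import random

NODES = {"rail_a": 300, "rail_b": 200}  # assumed per-tick capacities

def simulate(route_policy, ticks=100, per_tick=400, seed=7):
    """Count payments the simulated network could not absorb under a policy."""
    random.seed(seed)
    delayed = 0
    for _ in range(ticks):
        load = {n: 0 for n in NODES}
        for _ in range(per_tick):
            amount = random.uniform(1, 1_000_000)  # synthetic payment size
            node = route_policy(amount, load)
            if load[node] < NODES[node]:
                load[node] += 1
            else:
                delayed += 1  # capacity exceeded in the twin, never in production
    return delayed

def baseline(amount, load):
    return "rail_a"  # naive: route everything over one rail

def candidate(amount, load):
    return min(load, key=load.get)  # route to the least-loaded rail

print("baseline delays: ", simulate(baseline))
print("candidate delays:", simulate(candidate))
```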
You’ve highlighted the challenge of balancing extreme reliability with AI innovation. What are the first practical steps your team will take to explore new AI capabilities without compromising that ‘space flight’ level of trust? Can you describe the governance framework you envision for this?
That “space flight” level of trust is our north star; operating at 99.9999% reliability is non-negotiable. The first step is to create sandboxed environments that are completely isolated from our core processing systems. This is where my team will experiment with cutting-edge AI models. The governance framework I envision is a multi-stage, gated process. An AI model must first prove its value and stability in the sandbox, then move to a parallel, non-live environment where it runs on real data but can’t execute transactions. Only after succeeding at every stage and passing rigorous ethical and bias reviews will we even consider a limited, heavily monitored pilot in the live system. It’s about building a culture of responsible innovation where we can push boundaries without ever risking the foundation of trust.
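One way to picture such a gated process is to encode the stages and their exit criteria as data, so a model cannot advance until every check for its current gate passes. The skeleton below is a hypothetical sketch; the stage names, metrics, and thresholds are assumed for illustration and do not describe JP Morgan’s actual governance framework.

```python
# Hypothetical skeleton of a staged promotion pipeline: a model advances one
# stage only if every check for its current gate passes. Stages, metrics, and
# thresholds are illustrative assumptions.
from dataclasses import dataclass
from typing import Dict, List, Tuple

@dataclass
class ModelCandidate:
    name: str
    metrics: Dict[str, float]
    stage: str = "sandbox"

# Ordered gates and their exit criteria.
GATES: List[Tuple[str, list]] = [
    ("sandbox",  [("offline_accuracy",      lambda v: v >= 0.95)]),
    ("parallel", [("live_shadow_agreement", lambda v: v >= 0.99),
                  ("bias_review_score",     lambda v: v >= 0.90)]),
    ("pilot",    [("incident_count",        lambda v: v == 0)]),
]

def try_promote(model: ModelCandidate) -> str:
    """Advance the model one stage only if its current gate's checks all pass."""
    for i, (stage, checks) in enumerate(GATES):
        if model.stage != stage:
            continue
        failed = [name for name, ok in checks
                  if name not in model.metrics or not ok(model.metrics[name])]
        if failed:
            return f"{model.name} held at '{stage}': missing or failing {failed}"
        model.stage = GATES[i + 1][0] if i + 1 < len(GATES) else "live (monitored)"
        return f"{model.name} promoted to '{model.stage}'"
    return f"{model.name} already at terminal stage '{model.stage}'"

candidate = ModelCandidate("routing-model-v2", {"offline_accuracy": 0.97})
print(try_promote(candidate))  # sandbox -> parallel
print(try_promote(candidate))  # held: shadow-run and bias metrics not yet recorded
```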
Your roles extend to Smart Data Foundry and Wharton Executive Education. How do these external perspectives from a data-focused startup and academia directly influence your corporate strategy? Please share an example of how insights from one role might solve a challenge in another.
These external roles are crucial for preventing institutional tunnel vision. Smart Data Foundry keeps me connected to the nimble, startup way of thinking about data for public good, while Wharton provides a deep, academic rigor to my strategic frameworks. For example, a discussion at Wharton about the macroeconomic implications of real-time payment systems might directly inspire a new analytical model we can explore at JP Morgan. Conversely, a practical challenge we face in managing data at the scale of $12 trillion a day could become a fantastic, anonymized case study for a discussion with Wharton faculty, leading to novel academic insights that we can then bring back and apply. It creates a powerful feedback loop between theory and practice.
As you take charge of the payments data and AI strategy, what are your top one or two priorities for the first year? Could you walk me through the key metrics you will use to measure progress and explain why those specific indicators are the most critical?
My top priority is to establish that robust, secure framework for AI experimentation I mentioned earlier. Without a safe place to innovate, we can’t move forward. A close second is enhancing our real-time monitoring and predictive analytics capabilities. The most critical metric for the first priority will be the “time-to-safe-experiment”—how quickly my team can test a new idea in a secure environment. For the second, it will be “predictive accuracy rate,” specifically our ability to forecast and flag potential operational issues before they occur. These metrics are vital because they directly measure our capacity to innovate responsibly while simultaneously strengthening that 99.9999% reliability standard.
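For a sense of how such metrics might be tracked, the sketch below computes a median time-to-safe-experiment from timestamped experiment records and a precision/recall-style predictive accuracy from flagged-versus-actual incidents; the field names and data are entirely hypothetical.

```python
# Hypothetical metric calculations: median time-to-safe-experiment and a
# precision/recall-style predictive accuracy rate. All records are made up.
from datetime import datetime
from statistics import median

experiments = [  # (idea proposed, first safe sandbox run) - illustrative data
    (datetime(2025, 1, 6), datetime(2025, 1, 9)),
    (datetime(2025, 1, 13), datetime(2025, 1, 15)),
    (datetime(2025, 2, 3), datetime(2025, 2, 10)),
]
time_to_safe_experiment = median(
    (run - proposed).days for proposed, run in experiments
)

# Predicted vs. actual operational issues over some review window.
flagged = {"evt-101", "evt-107", "evt-112"}   # issues flagged in advance
occurred = {"evt-101", "evt-112", "evt-130"}  # issues that actually materialised
precision = len(flagged & occurred) / len(flagged)
recall = len(flagged & occurred) / len(occurred)

print(f"median time-to-safe-experiment: {time_to_safe_experiment} days")
print(f"predictive precision: {precision:.0%}, recall: {recall:.0%}")
```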
What is your forecast for the role of AI in global payments over the next five years?
Over the next five years, I see AI becoming the central nervous system of global payments, moving beyond its current role in fraud detection into something far more proactive and intelligent. It will be the engine that drives predictive liquidity management for corporations, automates complex cross-border compliance, and offers truly dynamic routing to optimize transaction speed and cost on a global scale. The biggest shift will be from reactive AI, which spots problems after they happen, to predictive AI, which will anticipate market shifts, network congestion, and even geopolitical risks to keep the massive flow of capital moving seamlessly and securely. It won’t just be about making payments faster; it will be about making the entire financial ecosystem smarter and more resilient.
