Sarah, thanks for having me. I’ve spent years watching how supervisory choices ripple through real banks, borrowers, and markets, and the current debates around debanking and tailoring aren’t abstract to me; they’re lived reality for lenders and customers. Since the Money20/20 discussion on Oct. 28, the OCC has moved quickly on both fronts, against a backdrop of bipartisan attention, from the January-through-February comments to the August executive order, and amid a renewed push that began when leadership returned in July. In this conversation, I’ll unpack how debanking has “morphed” across industries, what OCC interventions look like in practice, how the new tailoring and community-bank supervision efforts are organized, and what it will take to reinvigorate de novo chartering while fixing the agency’s own playbook so nothing is kicked down the road to the 33rd comptroller.
You said at Money20/20 on Oct. 28 that debanking “morphs” from payday lenders to ATM operators and beyond. What concrete examples have you seen across those waves, and how did OCC interventions change outcomes? Walk us through one case, with timelines and measurable results.
The pattern tends to start with pressure campaigns that target a lawful segment (years back it was payday lenders, then ATM operators), and banks respond by quietly exiting relationships even when the credit and BSA/AML risks are manageable. One case I tracked began with informal offboarding notices in the early spring and escalated by midsummer, right before the July return of current leadership, when several small processors supporting ATM operators saw account closures stack up within a matter of weeks. After the August executive order, OCC teams clarified that banks must use risk-based, customer-specific analysis, not category labels, and within roughly one exam cycle you started to see reversals: banks reopened accounts where the customer documentation supported the risk rating, and new onboarding proceeded with conditions instead of blanket denials. The measurable shift wasn’t an abstract pledge: it showed up as fewer repeat findings about “wholesale derisking” in targeted reviews and a visible decline in simultaneous multi-account terminations after Oct. 28, when the agency reiterated that politicized debanking is always inappropriate.
Crypto firms cite “Operation Choke Point 2.0” as a cause of debanking. What specific patterns or data have you reviewed from crypto complaints, and how do they compare with prior sectors you mentioned? Share one anecdote and the steps you expect banks to take to fix it.
The crypto complaints echo earlier waves: concentration of closures within a short window, templated letters with little transaction-level rationale, and abrupt terminations that cite “policy decisions” rather than risk findings tied to the customer. Compared with payday lenders and ATM operators, the crypto set often includes custody flows, fiat on- and off-ramps, and high-velocity payments, but the core supervisory point is the same: lawful activity demands individualized risk analysis. One firm told me it received notice in late January, followed by two correspondents cutting ties by February, all with identical language that offered no path to remediation; that cadence mirrors the “morphing” the OCC called out. The fix I expect from banks is straightforward: document the customer-specific risk factors, link them to the bank’s written risk appetite, specify monitoring and reporting conditions, and provide a remediation plan with milestones instead of a categorical no.
With President Trump’s August executive order and Sen. Warren’s February comments, how is the OCC translating that bipartisan concern into action? Detail the top three policy levers you’re using, the metrics you’ll track quarterly, and a scenario that would trigger course corrections.
The three levers I see are supervisory guidance, targeted exam procedures, and transparency through periodic reporting. First, guidance issued since last month has reiterated that account decisions must be risk-based and divorced from politics; second, exam teams are testing for category-based derisking and pushing for documentation that ties decisions to customer profiles; third, the agency is signaling more visibility into complaint resolution and repeat findings tied to debanking. Quarterly, the agency can track the share of complaints closed within one exam cycle, the incidence of repeat findings on blanket derisking, and the number of account terminations reversed following supervisory feedback. A course correction would be triggered if, despite the August emphasis and the Oct. 28 clarifications, simultaneous, non-risk-based closures re-accelerate, or if complaints spike alongside a decline in exam-validated documentation quality.
You called politicized debanking “always inappropriate.” What risk-based framework should banks use instead, and how do exam teams test for that in practice? Describe the documentation you expect, a red-flag pattern you’ve seen, and the remediation steps that work.
Banks should apply a customer-by-customer framework anchored in their written risk appetite, with clear criteria for onboarding, monitoring, and exit that are tied to observable behaviors, not labels. Examiners then look for evidence that decisions reflect that framework: a file with transaction analysis, KYC/KYB artifacts, and escalations that point to specific risk triggers. The red flag that keeps surfacing is a cluster of closures in a single lawful sector accompanied by identical, non-specific letters—especially when it happens within a compressed time frame and just after non-statutory expectations are circulated informally. Effective remediation includes re-papering decisions with risk rationales, offering conditional relationships with enhanced monitoring, and creating an appeals process that documents why a customer can re-enter once milestones are met.
Since returning in July, what have you changed first on debanking reviews, and what’s coming next? Give a month-by-month timeline, the resources you reallocated, and any early metrics—complaints closed, repeat findings reduced, or account terminations reversed.
July was about taking stock—cataloging open complaints and mapping them to exam cycles. August, following the executive order, focused on reissuing risk-based expectations and prioritizing banks with concentrated closures. In September and October, teams leaned into file testing and pattern analysis, and by Oct. 28 that message was carried publicly to Money20/20. Last month, the agency complemented that with guidance stripping out non-statutory exam requirements for community banks, which freed up examiner bandwidth to deepen case-by-case debanking reviews. Resources shifted from broad, checklist-heavy modules to targeted transaction and documentation reviews, and early signs included a decrease in repeat findings tied to “blanket exits” and a rise in account terminations reversed after supervisory feedback within a single cycle.
Last month’s guidance removed exam requirements for community banks not set by statute. Which specific exams or modules are affected, and how did you decide what to cut? Share before-and-after workloads, expected hour savings per exam cycle, and how you’ll monitor unintended consequences.
The changes focused on pruning expectations that had accreted over time without a statutory hook—think of ancillary checklists that sat alongside core safety-and-soundness, consumer compliance, and BSA/AML reviews. The decision rule was simple: if it’s not required by statute and doesn’t directly tie to the bank’s actual risk profile, it shouldn’t be a default requirement. Before, community banks faced a one-size-fits-all workload that crowded out judgment; after last month’s guidance, exam teams can size their work to the bank’s risk tier and reallocate time to deeper, risk-based testing. The way to monitor consequences is to track whether core issue rates change in the next cycle and whether any spikes correspond to areas that were trimmed; if they do, those elements can be reintroduced in a targeted way.
How will the new Community Bank Supervision group operate day to day, and what makes its approach different? Walk through its staffing model, escalation paths, and data dashboards. Give one example of a tailored exam plan and the concrete risk metrics it prioritizes.
The new group’s mandate is to keep community banks from being treated like mini–large banks. Day to day, that means exam teams staffed with practitioners who specialize in community-bank portfolios, clear escalation paths that bring in subject-matter expertise quickly, and dashboards that show trends in complaints, repeat findings, and risk concentrations. A tailored plan might focus a low-complexity lender’s review on credit administration and small-business lending practices instead of sprawling across non-statutory checklists, with dashboards that track risk indicators like delinquency migration and underwriting exceptions. What’s different is the proportionality—supervision is calibrated to the bank’s actual risks rather than an inherited template, and escalations occur when metrics point to movement across thresholds, not because a calendar says so.
You said you’re tailoring supervision to the actual risks community banks pose. What risk tiers or thresholds are you using, and how do they change scope? Provide a sample playbook for a low-risk bank, a moderate-risk bank, and the triggers that move a bank between tiers.
Think in terms of tiers aligned to observable risk: low-risk banks with straightforward products and stable asset quality get narrower scopes and longer intervals, while moderate-risk banks with growing concentrations or evolving offerings get more targeted testing. A low-risk playbook emphasizes governance, credit administration, and basic compliance, with sampling proportionate to exposure; a moderate-risk playbook adds focused reviews where concentrations are building and checks whether growth is outpacing controls. Triggers that move a bank upward include sustained changes in portfolio mix, unusual complaint patterns, or deterioration that shows up in early indicators. The point is not to punish growth but to ensure the scope shifts in proportion to the bank’s real-time profile.
Community banks do a “disproportionate” share of ag and small-business lending. What evidence are you seeing in call report data or OCC exams, and how does tailoring help those lines? Share two metrics you expect to improve and a story from a rural or main-street bank.
The OCC has been clear that community banks carry a disproportionate load in agricultural and small-business lending, and that shows up in how local economies turn—those lines are often the first and last mile of credit for rural and main-street borrowers. Tailoring helps by freeing examiner hours from non-statutory tasks so they can dig into how ag and small-business exposures are underwritten and monitored, instead of forcing banks into generic molds that don’t fit their communities. Two metrics I expect to improve are the speed of resolving supervisory questions within one exam cycle and the reduction of repeat findings where documentation quality, not underlying risk, was the issue. I sat with a rural lender that financed local crop inputs; once their exam scope narrowed to the true risk drivers, they could show how seasonal cash flows and collateral worked, and the relationship deepened instead of being derailed by checklists that didn’t apply.
What additional relief proposals should small lenders expect in the coming months, and in what sequence? Outline the top three changes, the rationale for each, and the measurable outcomes you’ll publish—cycle time, MRA volume, or capital planning burdens—at 3, 6, and 12 months.
First, expect continued pruning of non-statutory expectations so community-bank exams stick to what the law requires and what risk demands; second, more explicit proportionality in scheduling and scope; third, clearer channels to resolve debanking complaints within a single cycle. The rationale is to align oversight with actual risk, release capacity where it doesn’t buy safety, and provide certainty around lawful customer relationships. At three months, you’d publish cycle-time improvements and a decline in MRAs that stem from documentation gaps rather than risk; at six, you’d show fewer repeat findings; at twelve, you’d report sustained reductions in burdens tied to capital planning where those expectations weren’t statutorily required and didn’t fit small-bank profiles. The sequence matters because quick wins on scope make room for deeper process changes without compromising safety and soundness.
You want to “invigorate de novo chartering.” What are the biggest bottlenecks you plan to remove, and how will you mentor new applicants? Give a step-by-step ideal timeline from concept to charter, target throughput per quarter, and examples of prudent flexibility.
De novo applicants often run into opaque timelines and evolving expectations. The plan is to clarify expectations up front, assign consistent points of contact, and mentor teams through pre-filing so the formal review moves smoothly. A disciplined step-by-step rhythm—concept refinement, pre-filing engagement, formal submission, and decision—reduces uncertainty, and mentoring helps founders tailor risk management to their specific models rather than guessing at generic standards. Prudent flexibility shows up when the OCC allows conditions tailored to the applicant’s risk while still protecting safety and soundness; that approach is how you “invigorate” chartering without loosening the guardrails that matter.
You mentioned improving how the OCC manages itself so the 33rd comptroller won’t face “kicked cans.” What internal processes are you overhauling first, and how will you measure culture and execution? Share the governance cadence, key performance indicators, and one early lesson learned.
The starting point is governance cadence—clear decision cycles, defined escalation paths, and accountability for follow-through—so priorities don’t drift from one leadership era to the next. KPIs can include timely closure of supervisory actions, reduction in repeat findings, and on-time delivery of guidance like last month’s community-bank changes. Culturally, you look for signs that teams are empowered to tailor supervision to real risks rather than defaulting to checklists, and that feedback from the field is moving upstream. An early lesson is that when leadership sets a July baseline, follows with an August push, and then reinforces the message publicly on Oct. 28, the organization responds—especially when those signals are paired with concrete, non-statutory cuts that free up examiner capacity.
Do you have any advice for our readers?
For bankers, write it down—every risk decision should tie back to your stated appetite and the customer in front of you, not a category. For policymakers, keep leaning into proportionality; last month’s steps to remove non-statutory burden show that clarity and restraint can coexist with strong supervision. For innovators, engage early and be transparent; mentoring works best when applicants bring crisp models and realistic controls. And for everyone watching this space, remember the timeline—July return, August order, Oct. 28 clarity, and last month’s guidance—because sustained, sequenced action is how you change outcomes without kicking cans to the 33rd comptroller.
