Meet the Expert: Jack Fallon, Payments Migration Specialist
Jack Fallon has orchestrated enterprise migrations at three major payment-processing banks in the UK and Ireland, and along the way he has tested every variety of customer health scoring theory: some failed spectacularly, some bent under pressure, and a handful stuck. Below is a fast-paced exchange that cuts through platitudes and digs into what actually works when migrating legacy systems and scoring customer health at scale.
Q1: What’s the first mistake most teams make when scoring customer health during migration?
Jack: Most start with a spreadsheet of "classic" metrics—transaction volumes, NPS, ticket counts. Sounds sensible, feels safe. In practice, it’s useless for migration risk. In my experience, the most telling early-warning metrics are change resistance behaviors.
For example, at one Tier 1 Irish bank, we saw that large merchants with low support ticket volumes were flagged as “healthy.” But in the first two weeks post-migration, three of these “healthy” portfolios triggered major complaints. Their quietness had masked deep-seated reluctance to engage.
What’s worked instead is scoring engagement with the migration communications themselves: Are they opening technical bulletins? Logging into sandbox environments? Submitting sandbox transactions? Low uptake here is a five-alarm fire, and it shows up well ahead of NPS or support data.
Q2: What data sources actually matter for health scoring in this context?
Jack: Ignore the 360-degree dashboards. Focus on three classes:
| Data Source | Usefulness | Limitation |
|---|---|---|
| Migration Readiness Surveys | High (when real engagement) | Biased if over-incentivized |
| Transactional Velocity | High (pre/post-migration delta) | Lagging indicator |
| Support Interactions (Pre-Go) | Medium (contextual) | Can hide risk among low-touch customers |
If you’re not using Zigpoll, Survicate, or Typeform for embedded readiness surveys in your migration comms, you’re flying blind. We saw post-migration support volumes fall by six percentage points (600bps) when readiness survey completion rates exceeded 75%, but only when the questions were tailored rather than generic.
Q3: How do you define a “healthy” enterprise customer in this scenario?
Jack: Not by feel-good dashboards. A healthy customer is one that:
- Engages in at least 2 of 3 pre-migration touchpoints (sandbox, readiness survey, knowledge base)
- Demonstrates <5% drop in transaction volumes in the first two weeks post-migration (vs. their trailing 4-week average)
- Raises migration-specific tickets before go-live, not after
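The three criteria above can be sketched as a simple boolean classifier. A minimal sketch, assuming illustrative field names and a weekly-average volume comparison; a real model would pull these signals from CRM and transaction telemetry, not hard-coded fields.

```python
from dataclasses import dataclass


@dataclass
class CustomerSignals:
    # Illustrative fields, not a production schema.
    sandbox_tested: bool
    readiness_survey_done: bool
    knowledge_base_visited: bool
    post_migration_weekly_volume: float   # avg weekly volume, first 2 weeks post-migration
    trailing_4wk_weekly_volume: float     # avg weekly volume, trailing 4 weeks pre-migration
    tickets_raised_before_golive: bool    # migration-specific tickets raised pre-go-live


def is_healthy(s: CustomerSignals) -> bool:
    """Apply the three-criteria definition: 2 of 3 touchpoints,
    <5% volume drop, and proactive (pre-go-live) ticketing."""
    touchpoints = sum([s.sandbox_tested,
                       s.readiness_survey_done,
                       s.knowledge_base_visited])
    volume_drop = 1 - (s.post_migration_weekly_volume / s.trailing_4wk_weekly_volume)
    return touchpoints >= 2 and volume_drop < 0.05 and s.tickets_raised_before_golive
```

Note the design choice: the "quiet" customer with zero touchpoints and no pre-go-live tickets fails the check even with stable volumes, which is exactly the false-health pattern described in Q1.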
For example, in 2023 at a major UK acquirer, 80% of “healthy” customers (by this definition) had zero critical issues post-migration, versus 41% for the “classic” support-activity metric.
Q4: Where do most health scoring models trip up during enterprise migrations?
Jack: They overfit on historical data. Legacy health models assume past behavior predicts future migration risk—which it doesn’t. Your die-hard, high-volume customer today might have an IT team that’s unprepared for OAuth2 tokenization changes, making them your biggest risk during switchover.
Add to that, most health models ignore stakeholder churn. If your main contact has left, or there’s a new project manager, reset your health score to “unknown” until you see renewed engagement.
Q5: Give us a real example where health scoring changed migration outcomes.
Jack: In 2022, at a top-5 UK payment processor, we flagged an SME merchant as “unhealthy” after seeing <30% open rates on migration comms and zero sandbox logins. Our account manager pushed a direct call, discovered an uncommunicated integration partner, and avoided a £1.2m volume outage.
Contrast: A theoretically “healthy” portfolio (high NPS, zero support tickets) lost 6% of transactions in week one post-migration because their technical lead had left two weeks before go-live. No one updated the model.
Q6: What techniques help spot “false health” signals among enterprise accounts?
Jack: Pattern-matching across channels is the best defense. For example:
- If you see high email open rates but zero engagement in the migration sandbox, that’s performative compliance.
- Consistent participation across platforms is what matters—survey, sandbox, direct calls, portal logins.
Also, use speed of response as a predictive metric: If an enterprise takes more than 48 hours to respond to a critical update, reclassify them as “at risk” until proven otherwise. In my experience, this flagged 90% of eventually-problematic migrations early.
Q7: How do you account for regulatory and market nuances in the UK and Ireland?
Jack: Two specifics:
PSR-compliance: Many UK-based enterprises are obsessed with Payment Systems Regulator (PSR) mandates. If migration comms don’t mention PSR implications, expect pushback or foot-dragging—so factor “compliance anxiety” into your scoring.
Irish market quirk: Multi-currency merchants in the Republic of Ireland often have third-party PSPs in the stack. Their “health” depends as much on partner engagement as your direct customer. If you’re not tracking readiness of all stakeholders in the chain, your score is fantasy.
A 2024 Forrester study found that 74% of Irish payment-processing migrations stumbled due to unvetted partner technologies—not direct customer failures.
Q8: What are the best ways to operationalize health scoring during a migration—without it becoming a reporting black hole?
Jack: Three things:
Keep scoring auditable but light: Use 4–6 binary indicators (e.g., “Sandbox tested: Yes/No”). Scorecards should update automatically in your CRM, not via email chains or side spreadsheets.
Make health scores visible to every function: Sales, ops, technical, and compliance—all see the same “health” dashboard. We cut time-to-intervention by 70% this way at a major UK acquirer in 2023.
Automate follow-ups: If a customer drops from “healthy” to “at-risk,” trigger an automated workflow: assign an account manager, send a priority comms pack, schedule a technical review. At one bank, this approach brought our post-migration issue rate down from 14% to 6% quarter over quarter.
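The three practices above can be sketched together: a light, auditable scorecard of binary indicators, plus a follow-up trigger that fires only on a downgrade. The indicator names, status bands, and workflow actions are illustrative assumptions, not a specific CRM integration.

```python
# Illustrative binary indicators; keep the list at 4-6 items so it stays auditable.
INDICATORS = ("sandbox_tested", "survey_completed", "comms_opened",
              "stakeholder_confirmed", "compliance_acknowledged")


def score(flags: dict) -> str:
    """Map the binary indicators onto a three-band health status."""
    hits = sum(bool(flags.get(name, False)) for name in INDICATORS)
    if hits >= 4:
        return "healthy"
    return "watch" if hits >= 2 else "at-risk"


def on_status_change(old: str, new: str) -> list:
    """Automated follow-ups fire only on a downgrade to at-risk."""
    if new == "at-risk" and old != "at-risk":
        return ["assign_account_manager",
                "send_priority_comms_pack",
                "schedule_technical_review"]
    return []
```

Keeping each indicator binary means any function reading the shared dashboard can see exactly which checkbox flipped, which is what makes the score auditable rather than a black box.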
Table: Comparison of Theoretical vs. Practical Health Scoring Models
| Feature/Factor | Legacy Models | Migration-Optimized Models |
|---|---|---|
| Uses historical ticket data | Yes | No |
| Tracks real-time engagement | No | Yes |
| Flags stakeholder churn | No | Yes |
| Partner/3rd-party readiness | Rare | Essential (Republic of Ireland context) |
| Adjusts for compliance status | Sometimes | Always (PSR/PSD2 triggers) |
Bonus: What’s one caveat or limitation to your approach?
Jack: This approach doesn’t scale neatly to every situation. For high-volume, low-touch e-commerce merchants (think: hundreds of micro-merchants using one aggregator), much of the signal gets lost. You’ll need tailored, segment-specific scoring—what works for Lloyds or AIB’s enterprise clients isn’t what you want for Stripe-like aggregators.
Additionally, over-automation creates false positives: if your workflow triggers an “at-risk” flag for every minor lag in engagement, you’ll burn out your ops team with false alarms. Tune your thresholds ruthlessly.
Final Take: Three Actionable Steps for Senior Ops
- Ditch historic ticket metrics for real-time engagement—and track it everywhere your client interacts (sandbox, comms, survey).
- Involve compliance and third-party partners in your scoring—especially for the Irish market and regulated portfolios.
- Automate health scoring updates and follow-ups, but rigorously fine-tune thresholds to avoid alert fatigue.
Ignore one of these, and you’ll be running post-mortems with your risk team by the end of migration month. Get them right, and your migration meets regulatory, technical, and commercial objectives—without the post-go-live firefighting.